Alita SDK
SDK for building LangChain agents using resources from Alita
Alita SDK, built on top of Langchain, enables the creation of intelligent agents within the Alita Platform using project-specific prompts and data sources. This SDK is designed for developers looking to integrate advanced AI capabilities into their projects with ease.
Prerequisites
Before you begin, ensure you have the following requirements met:
- Python 3.10+
- An active deployment of Project Alita
- Access to a personal project
Installation
It is recommended to use a Python virtual environment to avoid dependency conflicts and keep your environment isolated.
1. Create and activate a virtual environment
For Unix/macOS:
python3 -m venv .venv
source .venv/bin/activate
For Windows:
python -m venv .venv
.venv\Scripts\activate
2. Install dependencies
Install all required dependencies for the SDK and toolkits:
pip install -U '.[all]'
Environment Setup
Before running your Alita agents, set up your environment variables. Create a .env file in the root directory of your project and include your Project Alita credentials:
DEPLOYMENT_URL=<your_deployment_url>
API_KEY=<your_api_key>
PROJECT_ID=<your_project_id>
NOTE: These variables can be found on your Elitea platform configuration page.
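Once the .env file is loaded, your own scripts can read these credentials from the environment. A minimal standard-library sketch (the helper name is ours, not part of the SDK):

```python
import os

def load_alita_credentials(env=os.environ):
    """Collect Alita credentials from environment variables.

    Returns (config, missing): config maps each variable name to its value,
    and missing lists any variables that are unset or empty.
    """
    names = ["DEPLOYMENT_URL", "API_KEY", "PROJECT_ID"]
    config = {name: env.get(name, "") for name in names}
    missing = [name for name in names if not config[name]]
    return config, missing
```

Checking the `missing` list at startup gives a clearer error than a failed API call later.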
Custom .env File Location
By default, the CLI looks for .env files in the following order:
1. .alita/.env (recommended)
2. .env in the current directory
You can override this by setting the ALITA_ENV_FILE environment variable:
export ALITA_ENV_FILE=/path/to/your/.env
alita-cli agent chat
Using the CLI for Interactive Chat
The Alita SDK includes a powerful CLI for interactive agent chat sessions.
Starting a Chat Session
# Interactive selection (shows all available agents + direct chat option)
alita-cli agent chat
# Chat with a specific local agent
alita-cli agent chat .alita/agents/my-agent.agent.md
# Chat with a platform agent
alita-cli agent chat my-agent-name
Direct Chat Mode (No Agent)
You can start a chat session directly with the LLM without any agent configuration:
alita-cli agent chat
# Select option 1: "Direct chat with model (no agent)"
This is useful for quick interactions or testing without setting up an agent.
Chat Commands
During a chat session, you can use the following commands:
| Command | Description |
|---|---|
| /help | Show all available commands |
| /model | Switch to a different model (preserves chat history) |
| /add_mcp | Add an MCP server from your local mcp.json (preserves chat history) |
| /add_toolkit | Add a toolkit from $ALITA_DIR/tools (preserves chat history) |
| /clear | Clear conversation history |
| /history | Show conversation history |
| /save | Save conversation to file |
| exit | End conversation |
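Slash commands like these map naturally onto a lookup table. The sketch below is purely illustrative (the CLI's actual internals may differ) and shows how such input could be dispatched:

```python
def dispatch(line, handlers):
    """Route a chat input line: the first word is looked up in the handler
    table (slash commands and 'exit'); anything else is a plain message."""
    text = line.strip()
    command, _, arg = text.partition(" ")
    if command in handlers:
        return handlers[command](arg)
    return ("message", text)

# Hypothetical handler table mirroring the commands above.
handlers = {
    "/help": lambda arg: ("help", None),
    "/model": lambda arg: ("switch_model", arg or None),
    "/clear": lambda arg: ("clear_history", None),
    "exit": lambda arg: ("exit", None),
}
```

For example, `dispatch("/model gpt-4o", handlers)` would return `("switch_model", "gpt-4o")`, while ordinary text falls through as a message to the agent.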
Enhanced Input Features
The chat interface includes readline-based input enhancements:
| Feature | Key/Action |
|---|---|
| Tab completion | Press Tab to autocomplete commands (e.g., /mo → /model) |
| Command history | ↑ / ↓ arrows to navigate through previous messages |
| Cursor movement | ← / → arrows to move within the current line |
| Start of line | Ctrl+A jumps to the beginning of the line |
| End of line | Ctrl+E jumps to the end of the line |
| Delete word | Ctrl+W deletes the word before cursor |
| Clear line | Ctrl+U clears from cursor to beginning of line |
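Tab completion of this kind is typically built on Python's standard readline module. A minimal, illustrative completer for the slash commands (not the CLI's actual code):

```python
import readline  # stdlib on Unix; Windows may need the pyreadline3 package

COMMANDS = ["/help", "/model", "/add_mcp", "/add_toolkit",
            "/clear", "/history", "/save"]

def complete(text, state):
    """Return the state-th command matching the typed prefix,
    or None once there are no more matches."""
    matches = [cmd for cmd in COMMANDS if cmd.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
```

With this in place, typing `/mo` and pressing Tab expands to `/model`, matching the behavior described above.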
Dynamic Model Switching
Use /model to switch models on the fly:
> /model
🔧 Select a model:
# Model Type
1 gpt-4o openai
2 gpt-4o-mini openai
3 claude-3-sonnet anthropic
Select model number: 1
✓ Selected: gpt-4o
╭──────────────────────────────────────────────────────────────╮
│ ℹ Model switched to gpt-4o. Agent state reset, chat history │
│ preserved. │
╰──────────────────────────────────────────────────────────────╯
Adding MCP Servers Dynamically
Use /add_mcp to add MCP servers during a chat session. Servers are loaded from your local mcp.json file (typically at .alita/mcp.json):
> /add_mcp
🔌 Select an MCP server to add:
# Server Type Command/URL
1 playwright stdio npx @playwright/mcp@latest
2 filesystem stdio npx @anthropic/mcp-fs
Select MCP server number: 1
✓ Selected: playwright
╭──────────────────────────────────────────────────────────────╮
│ ℹ Added MCP: playwright. Agent state reset, chat history │
│ preserved. │
╰──────────────────────────────────────────────────────────────╯
Adding Toolkits Dynamically
Use /add_toolkit to add toolkits from your $ALITA_DIR/tools directory (default: .alita/tools):
> /add_toolkit
🧰 Select a toolkit to add:
# Toolkit Type File
1 jira jira jira-config.json
2 github github github-config.json
Select toolkit number: 1
✓ Selected: jira
╭──────────────────────────────────────────────────────────────╮
│ ℹ Added toolkit: jira. Agent state reset, chat history │
│ preserved. │
╰──────────────────────────────────────────────────────────────╯
Using SDK with Streamlit for Local Development
To use the SDK with Streamlit for local development, follow these steps:
1. Ensure you have Streamlit installed:
pip install streamlit
2. Run the Streamlit app:
streamlit run alita_local.py
Note: If Streamlit throws an error related to PyTorch, add the extra argument --server.fileWatcherType none. Streamlit's file watcher sometimes tries to index PyTorch modules, and since they are C extension modules this raises an exception.
Example of launch configuration for Streamlit:
Important: Make sure to set the correct path to your .env file and to the streamlit executable.
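As an illustration, a VS Code launch configuration along these lines runs Streamlit under the debugger (the paths shown are assumptions; adjust the envFile and arguments to your setup):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Streamlit: alita_local",
      "type": "debugpy",
      "request": "launch",
      "module": "streamlit",
      "args": ["run", "alita_local.py", "--server.fileWatcherType", "none"],
      "envFile": "${workspaceFolder}/.env",
      "justMyCode": false
    }
  ]
}
```

Setting "justMyCode" to false lets breakpoints inside the installed SDK packages trigger as well.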
Streamlit Web Application
The Alita SDK includes a Streamlit web application that provides a user-friendly interface for interacting with Alita agents. This application is powered by the streamlit.py module included in the SDK.
Key Features
- Agent Management: Load and interact with agents created in the Alita Platform
- Authentication: Easily connect to your Alita/Elitea deployment using your credentials
- Chat Interface: User-friendly chat interface for communicating with your agents
- Toolkit Integration: Add and configure toolkits for your agents
- Session Management: Maintain conversation history and thread state
Using the Web Application
1. Authentication:
- Navigate to the "Alita Settings" tab in the sidebar
- Enter your deployment URL, API key, and project ID
- Click "Login" to authenticate with the Alita Platform
2. Loading an Agent:
- After authentication, you'll see a list of available agents
- Select an agent from the dropdown menu
- Specify a version name (default: 'latest')
- Optionally, select an agent type and add custom tools
- Click "Load Agent" to initialize the agent
3. Interacting with the Agent:
- Use the chat input at the bottom of the screen to send messages to the agent
- The agent's responses will appear in the chat window
- Your conversation history is maintained until you clear it
4. Clearing Data:
- Use the "Clear Chat" button to reset the conversation history
- Use the "Clear Config" button to reset toolkit configurations
This web application simplifies the process of testing and interacting with your Alita agents, making development and debugging more efficient.
Using Elitea toolkits and tools with Streamlit for Local Development
Toolkits are part of the Alita SDK (alita-sdk/tools), so you can use them in your local development environment as well. To debug a toolkit, use the alita_local.py file, a Streamlit application that lets you interact with your agents and toolkits while setting breakpoints in the code of the corresponding tool.
Example of debugging an agent with Streamlit:
Assume we want to debug a user's agent called Questionnaire, which uses the Confluence toolkit and its get_pages_with_label method.
Pre-requisites:
- Make sure you have set the correct variables in your .env file
- Set breakpoints in the alita_sdk/tools/confluence/api_wrapper.py file, in the get_pages_with_label method
1. Run the Streamlit app (using debug):
streamlit run alita_local.py
2. Log into the application with your credentials (populated from the .env file):
- Enter your deployment URL, API key, and project ID (optional)
- Click "Login" to authenticate with the Alita Platform
3. Select the Questionnaire agent
4. Query the agent with the required prompt:
get pages with label `ai-mb`
5. Debug the agent's code:
- The Streamlit app will call the get_pages_with_label method of the Confluence toolkit
- Execution will stop at the breakpoint you set in the alita_sdk/tools/confluence/api_wrapper.py file
- You can inspect variables, step through the code, and analyze the flow of execution
How to create a new toolkit
A toolkit is a collection of pre-built tools and functionalities designed to simplify the development of AI agents. Toolkits provide developers with the necessary resources, such as APIs and data connectors to required services and systems. As an initial step, decide on the toolkit's capabilities so you can design the required tools and their args schemas. Example of the Testrail toolkit's capabilities:
- get_test_cases: Retrieve test cases from Testrail
- get_test_runs: Retrieve test runs from Testrail
- get_test_plans: Retrieve test plans from Testrail
- create_test_case: Create a new test case in Testrail
- etc.
General Steps to Create a Toolkit
1. Create the Toolkit package
Create a new package under alita_sdk/tools/ for your toolkit, e.g., alita_sdk/tools/mytoolkit/.
2. Implement the API Wrapper
Create an api_wrapper.py file in your toolkit directory. This file should:
- Define a config class (subclassing BaseToolApiWrapper).
- Implement methods for each tool/action you want to implement.
- Provide a get_available_tools() method that returns tools' metadata and argument schemas.
Note:
- Args schemas should be defined using Pydantic models, which helps validate the input parameters for each tool.
- Make sure tool descriptions are clear and concise, as the LLM uses them to decide on the tool execution chain.
- Clearly define the input parameters for each tool, including whether each is required or optional, as the LLM uses them to generate the correct input for the tool (refer to https://docs.pydantic.dev/2.2/migration/#required-optional-and-nullable-fields if needed).
Example:
```python
# alita_sdk/tools/mytoolkit/api_wrapper.py
from pydantic import create_model, Field

from ...elitea_base import BaseToolApiWrapper


class MyToolkitConfig(BaseToolApiWrapper):
    # Define config fields (e.g., API keys, endpoints)
    api_key: str

    def do_something(self, param1: str):
        """Perform an action with param1."""
        # Implement your logic here
        return {"result": f"Did something with {param1}"}

    def get_available_tools(self):
        return [
            {
                "name": "do_something",
                "ref": self.do_something,
                "description": self.do_something.__doc__,
                "args_schema": create_model(
                    "DoSomethingModel",
                    param1=(str, Field(description="Parameter 1"))
                ),
            }
        ]
```
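The args_schema returned by get_available_tools() is an ordinary Pydantic model, so you can sanity-check it in isolation. A small standalone sketch (it recreates the schema rather than importing the SDK):

```python
from pydantic import create_model, Field

# Recreate the schema exactly as built in get_available_tools() above.
DoSomethingModel = create_model(
    "DoSomethingModel",
    param1=(str, Field(description="Parameter 1"))
)

# Valid input parses; omitting the required field raises a ValidationError.
parsed = DoSomethingModel(param1="hello")
```

This kind of quick check catches schema mistakes (wrong types, accidentally-optional fields) before the toolkit is wired into an agent.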
3. Implement the Toolkit Configuration Class
Create an __init__.py file in your toolkit directory. This file should:
- Define a toolkit_config_schema() static method for the toolkit's configuration (this data is used for rendering the toolkit configuration card on the UI).
- Implement a get_tools(tool) function to grab the toolkit's configuration parameters based on the configuration on the UI.
- Implement a get_toolkit() class method to instantiate tools.
- Return a list of tool instances via get_tools().
Example:
```python
# alita_sdk/tools/mytoolkit/__init__.py
from pydantic import BaseModel, Field, create_model
from langchain_core.tools import BaseToolkit, BaseTool

from .api_wrapper import MyToolkitConfig
from ...base.tool import BaseAction

name = "mytoolkit"


def get_tools(tool):
    return MyToolkit().get_toolkit(
        selected_tools=tool['settings'].get('selected_tools', []),
        url=tool['settings']['url'],
        password=tool['settings'].get('password', None),
        email=tool['settings'].get('email', None),
        toolkit_name=tool.get('toolkit_name')
    ).get_tools()


class MyToolkit(BaseToolkit):
    tools: list[BaseTool] = []

    @staticmethod
    def toolkit_config_schema() -> BaseModel:
        return create_model(
            name,
            url=(str, Field(title="Base URL", description="Base URL for the API")),
            email=(str, Field(title="Email", description="Email for authentication", default=None)),
            password=(str, Field(title="Password", description="Password for authentication", default=None)),
            selected_tools=(list[str], Field(title="Selected Tools", description="List of tools to enable", default=[])),
        )

    @classmethod
    def get_toolkit(cls, selected_tools=None, toolkit_name=None, **kwargs):
        config = MyToolkitConfig(**kwargs)
        available_tools = config.get_available_tools()
        tools = []
        for tool in available_tools:
            if selected_tools and tool["name"] not in selected_tools:
                continue
            tools.append(BaseAction(
                api_wrapper=config,
                name=tool["name"],
                description=tool["description"],
                args_schema=tool["args_schema"]
            ))
        return cls(tools=tools)

    def get_tools(self) -> list[BaseTool]:
        return self.tools
```
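The tool argument passed to get_tools() is a plain configuration dict produced from the UI settings. Under the assumptions of the example above (the values are illustrative), it would be shaped like:

```python
# Illustrative configuration dict matching the get_tools(tool) example above.
tool = {
    "type": "mytoolkittype",
    "toolkit_name": "My Toolkit",
    "settings": {
        "url": "https://api.example.com",
        "email": "user@example.com",
        "password": "secret",
        "selected_tools": ["do_something"],
    },
}

# The accessors used in get_tools() read it like this:
selected = tool["settings"].get("selected_tools", [])
url = tool["settings"]["url"]
```

Note that only "url" is accessed with a bare key lookup, so it is effectively required; the other settings fall back to None or an empty list.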
4. Add the Toolkit to the SDK
Update the __init__.py file in the alita_sdk/tools/ directory to include your new toolkit:
```python
# alita_sdk/tools/__init__.py
def get_tools(tools_list, alita: 'AlitaClient', llm: 'LLMLikeObject', *args, **kwargs):
    ...
    # add your toolkit here with proper type
    elif tool['type'] == 'mytoolkittype':
        tools.extend(get_mytoolkit(tool))


# add your toolkit's config schema
def get_toolkits():
    return [
        ...,
        MyToolkit.toolkit_config_schema(),
    ]
```
5. Test Your Toolkit
To test your toolkit, you can use the Streamlit application (alita_local.py) to load and interact with your toolkit.
1. Log in to the platform.
2. Select the Toolkit testing tab.
3. Choose your toolkit from the dropdown menu.
4. Adjust the configuration parameters as needed, and then test the tools by sending queries to them.
NOTE: Use function mode when testing the required tool.