# AgentBay <> LangChain Integration

Integration package for AgentBay with LangChain.
## Installation

It's recommended to create a virtual environment before installing the package:

```bash
python3 -m venv agentbay_langchain_env
source ./agentbay_langchain_env/bin/activate
```

To ensure you have the latest version of pip, first run:

```bash
pip install --upgrade pip
```

To install the package, run:

```bash
pip install -U langchain-agentbay-integration==0.2.2 wuying-agentbay-sdk==0.15.0
```

You'll also need to install LangChain and an LLM provider package. For example, to use it with OpenAI-compatible models:

```bash
pip install langchain==1.0.3 langchain-openai==1.0.1
```
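To double-check which versions were actually installed, you can query package metadata with the standard library from any Python session:

```python
# Print the installed version of each pinned package (standard library only).
from importlib.metadata import version, PackageNotFoundError

for pkg in ("langchain-agentbay-integration", "wuying-agentbay-sdk"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "NOT installed")
```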
## Setup

You need to configure credentials by setting the following environment variables.

### AgentBay API Key

- Visit the Agent-Bay Console
- Sign up or log in to your Alibaba Cloud account
- Navigate to the Service Management section
- Create a new API key or select an existing one
- Copy the API key and set it as the value of the `AGENTBAY_API_KEY` environment variable

> Note: your AgentBay account needs a Pro or higher subscription level to access the full functionality.

### DashScope API Key

- Visit the DashScope Platform
- Sign up or log in to your account
- Navigate to the API key management section
- Copy the API key and set it as the value of the `DASHSCOPE_API_KEY` environment variable

```bash
export AGENTBAY_API_KEY="your-agentbay-api-key"
export DASHSCOPE_API_KEY="your-dashscope-api-key"
```
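A missing key usually surfaces later as an opaque authentication error, so failing fast can save debugging time. A minimal sketch of a hypothetical helper (not part of the package) that validates the environment at startup:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, raising if it is unset or blank."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# At startup you might call:
#   require_env("AGENTBAY_API_KEY")
#   require_env("DASHSCOPE_API_KEY")
```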
## AgentBay Integration Toolkits

The AgentBay integration toolkits provide comprehensive sets of tools for interacting with the AgentBay cloud computing platform. Each toolkit is designed for a specific environment and use case.

### Mobile Automation (MobileIntegrationToolkit)

Designed for Android mobile device automation tasks.
#### Instantiation

Create an AgentBay session for mobile operations:

```python
import os
from dataclasses import dataclass

from agentbay import AgentBay, CreateSessionParams
from langchain_agentbay_integration.toolkits import MobileIntegrationToolkit
from langchain_agentbay_integration.tools import SessionData

# Create an AgentBay session for mobile operations
agent_bay = AgentBay()
params = CreateSessionParams(image_id="mobile_latest")
result = agent_bay.create(params)
session = result.session

# Initialize the toolkit for mobile operations
toolkit = MobileIntegrationToolkit()
tools = toolkit.get_tools()

# Define a context class for passing session data
@dataclass
class AgentContext:
    """You can add other fields as needed, but the session_data field is required."""
    session_data: SessionData = None  # Direct session object
```
#### Tools

The MobileIntegrationToolkit includes the following tools:

- `mobile_tap`: Tap on the mobile screen at specific coordinates
  - Input: `x` (int, required) - X coordinate of the tap position; `y` (int, required) - Y coordinate of the tap position
  - Output: JSON with `success` (bool), `message` (str), `x` (int), `y` (int)
- `mobile_swipe`: Swipe on the mobile screen from one point to another
  - Input: `start_x` (int, required) - Starting X coordinate; `start_y` (int, required) - Starting Y coordinate; `end_x` (int, required) - Ending X coordinate; `end_y` (int, required) - Ending Y coordinate; `duration_ms` (int, optional, default=300) - Duration of the swipe in milliseconds
  - Output: JSON with `success` (bool), `message` (str), `start_x` (int), `start_y` (int), `end_x` (int), `end_y` (int), `duration_ms` (int)
- `mobile_input_text`: Input text into the active field on mobile
  - Input: `text` (str, required) - Text to input
  - Output: JSON with `success` (bool), `message` (str), `text` (str)
- `mobile_send_key`: Send a key event to the mobile device (e.g., HOME, BACK)
  - Input: `key_code` (int, required) - Key code to send. Common codes: HOME=3, BACK=4, VOLUME_UP=24, VOLUME_DOWN=25, POWER=26, MENU=82
  - Output: JSON with `success` (bool), `message` (str), `key_code` (int), `key_name` (str)
- `mobile_get_ui_elements`: Get all UI elements on the current mobile screen
  - Input: `timeout_ms` (int, optional, default=2000) - Timeout in milliseconds to wait for UI elements
  - Output: JSON with `success` (bool), `message` (str), `elements` (list), `timeout_ms` (int)
- `mobile_screenshot`: Take a screenshot of the current mobile screen
  - Input: `file_path` (str, required) - File path to save the screenshot
  - Output: JSON with `success` (bool), `message` (str), `screenshot_url` (str), `file_path` (str)
- `mobile_wait`: Wait for a specified amount of time in milliseconds
  - Input: `milliseconds` (int, required) - Time to wait in milliseconds
  - Output: JSON with `success` (bool), `message` (str)
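Every tool above returns a JSON string containing at least `success` and `message`. When calling tools directly (outside the agent loop), a small hypothetical helper (not part of the package) can turn a failed call into an exception:

```python
import json

def unwrap_tool_result(raw: str) -> dict:
    """Parse a tool's JSON result string; raise if success is false."""
    data = json.loads(raw)
    if not data.get("success", False):
        raise RuntimeError(f"Tool call failed: {data.get('message', 'unknown error')}")
    return data

# A result shaped like mobile_tap's documented output:
tap = unwrap_tool_result('{"success": true, "message": "tapped", "x": 100, "y": 200}')
```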
#### Use within an agent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

# Initialize the LLM
llm = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model=os.getenv("DASHSCOPE_MODEL", "qwen3-max")
)

# Create the agent using the create_agent API from LangChain v1.0
agent = create_agent(
    llm,
    tools=tools,
    context_schema=AgentContext,  # Context schema for passing session data
    system_prompt="""You are a helpful assistant with access to AgentBay mobile tools that can automate Android mobile operations.
Use these tools to help the user accomplish their mobile automation tasks. When using coordinates, be specific about where you want to tap or swipe.
For key events, use these common codes:
- HOME: 3
- BACK: 4
- VOLUME_UP: 24
- VOLUME_DOWN: 25
- POWER: 26
- MENU: 82
Example workflows:
1. To open an app: Get UI elements, find the app icon coordinates, then tap on those coordinates
2. To fill a form: Tap on input fields, then use input_text to enter data
3. To navigate: Use swipe gestures to scroll or change screens
4. To take screenshots: Use the screenshot tool to capture the current screen state
5. To wait for operations: Use the wait tool to pause execution for a specified time"""
)

# Prepare the context with session data
session_data = SessionData()
session_data.mobile_session = session
context = AgentContext(session_data=session_data)

# Example usage (translated from the original Chinese prompt)
example_query = """
Save screenshots to this folder: ./snapshots/mobile/
0. Take a screenshot named '0_mobile_home_page.png'
1. Get all UI elements, tap the 应用宝 (YingYongBao app store) icon, wait 3 seconds for it to open, take a screenshot named '1_mobile_yyb_first_page.png'
1.1 Get all UI elements, tap the Agree button, wait 3 seconds, save a screenshot as '1.1_mobile_yyb_agree_page.png'
1.2 Get all UI elements, tap the popup close button in the top-right corner, wait 3 seconds, save a screenshot as '1.2_mobile_yyb_close_batch_install.png'
2. Get all UI elements, tap the search text box, type 12306, wait 3 seconds for the page to load, take a screenshot named '2_mobile_yyb_after_search.png'
2.1 Get all UI elements, tap the Download button for the 12306 app, wait 3 seconds for the download to finish, take a screenshot named '2.1_mobile_yyb_after_download.png'
2.2 Get all UI elements, tap the Cancel button on the high-speed-network prompt, wait 3 seconds, take a screenshot named '2.2_mobile_yyb_cancle_network.png'
3. Get all UI elements, tap the Install button for the 12306 app, take a screenshot named '3_mobile_yyb_first_click_install.png'
3.1 Get all UI elements, tap Settings to allow the installation, take a screenshot named '3.1_mobile_aggree_install.png'
3.2 Get all UI elements, tap Install, wait 5 seconds for the installation to finish, take a screenshot named '3.2_mobile_yyb_after_install.png'
4. Get all UI elements, tap Open, wait 5 seconds, take a screenshot named '4_mobile_12306_first_page.png'
"""
result = agent.invoke(
    {"messages": [{"role": "user", "content": example_query}]},
    context=context,
    config={"recursion_limit": 500}
)

# Extract and print the final output
if "messages" in result and len(result["messages"]) > 0:
    final_message = result["messages"][-1]
    if hasattr(final_message, 'content') and final_message.content:
        print(f"Result: {final_message.content}")
    else:
        print(f"Result: {final_message}")
else:
    print(f"Result: {result}")
```
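Cloud sessions hold resources until released, so it's good practice to delete the session when the agent run finishes, even on failure. The sketch below shows the try/finally pattern with a stand-in class so it runs locally; `agent_bay.delete(session)` is the session-release call in the AgentBay SDK, but verify the exact name against your wuying-agentbay-sdk version:

```python
# Cleanup pattern: always release the cloud session, even if the agent
# run raises. FakeAgentBay stands in for the real AgentBay client so
# this sketch is runnable without credentials.
class FakeAgentBay:
    def __init__(self):
        self.deleted = []

    def delete(self, session):
        self.deleted.append(session)

agent_bay = FakeAgentBay()
session = "session-1"
try:
    pass  # agent.invoke(...) would run here
finally:
    agent_bay.delete(session)  # runs whether or not the agent call fails
```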
### Computer Automation (ComputerIntegrationToolkit)

Designed for desktop/laptop computer automation tasks.

#### Instantiation

Create an AgentBay session for computer operations:

```python
import os
from dataclasses import dataclass

from agentbay import AgentBay, CreateSessionParams
from langchain_agentbay_integration.toolkits import ComputerIntegrationToolkit
from langchain_agentbay_integration.tools import SessionData

# Create an AgentBay session for computer operations
agent_bay = AgentBay()
# Use the "linux_latest" image for a full Linux desktop environment with GUI support
params = CreateSessionParams(image_id="linux_latest")
result = agent_bay.create(params)
session = result.session

# Initialize the toolkit for computer operations
toolkit = ComputerIntegrationToolkit()
tools = toolkit.get_tools()

# Define a context class for passing session data
@dataclass
class AgentContext:
    """You can add other fields as needed, but the session_data field is required."""
    session_data: SessionData = None  # Direct session object
```
#### Tools

The ComputerIntegrationToolkit includes the following tools:

- `computer_click_mouse`: Click the mouse at specified coordinates with the specified button
  - Input: `x` (int, required) - X coordinate for the mouse click; `y` (int, required) - Y coordinate for the mouse click; `button` (str, optional, default="left") - Mouse button to click. Options: 'left', 'right', 'middle', 'double_left'
  - Output: JSON with `success` (bool), `message` (str), `x` (int), `y` (int), `button` (str)
- `computer_move_mouse`: Move the mouse to specified coordinates
  - Input: `x` (int, required) - Target X coordinate for mouse movement; `y` (int, required) - Target Y coordinate for mouse movement
  - Output: JSON with `success` (bool), `message` (str), `x` (int), `y` (int)
- `computer_drag_mouse`: Drag the mouse from one point to another
  - Input: `from_x` (int, required) - Starting X coordinate for the drag operation; `from_y` (int, required) - Starting Y coordinate; `to_x` (int, required) - Ending X coordinate; `to_y` (int, required) - Ending Y coordinate; `button` (str, optional, default="left") - Mouse button to use for dragging. Options: 'left', 'right', 'middle'
  - Output: JSON with `success` (bool), `message` (str), `from_x` (int), `from_y` (int), `to_x` (int), `to_y` (int), `button` (str)
- `computer_scroll`: Scroll the mouse wheel at specified coordinates
  - Input: `x` (int, required) - X coordinate for the scroll operation; `y` (int, required) - Y coordinate for the scroll operation; `direction` (str, optional, default="up") - Scroll direction. Options: 'up', 'down', 'left', 'right'; `amount` (int, optional, default=1) - Scroll amount
  - Output: JSON with `success` (bool), `message` (str), `x` (int), `y` (int), `direction` (str), `amount` (int)
- `computer_get_cursor_position`: Get the current cursor position
  - Input: None
  - Output: JSON with `success` (bool), `message` (str), `x` (int), `y` (int)
- `computer_input_text`: Input text into the active field
  - Input: `text` (str, required) - Text to input
  - Output: JSON with `success` (bool), `message` (str), `text` (str)
- `computer_press_keys`: Press the specified keys
  - Input: `keys` (List[str], required) - List of keys to press (e.g., ['Ctrl', 'a']); `hold` (bool, optional, default=False) - Whether to hold the keys
  - Output: JSON with `success` (bool), `message` (str), `keys` (List[str]), `hold` (bool)
- `computer_screenshot`: Take a screenshot of the current screen
  - Input: `file_path` (str, required) - File path to save the screenshot
  - Output: JSON with `success` (bool), `message` (str), `screenshot_url` (str), `file_path` (str)
- `computer_ocr_elements`: Analyze a screenshot and identify all interactive UI elements with text
  - Input: `image_url` (str, required) - URL of the screenshot image to analyze
  - Output: JSON with `success` (bool), `message` (str), `result` (str)
- `computer_vlm_analysis`: Analyze an image using a vision language model
  - Input: `image_url` (str, required) - URL of the image to analyze; `prompt` (str, required) - Custom prompt for the vision language model
  - Output: JSON with `success` (bool), `message` (str), `result` (str)
- `computer_wait`: Wait for a specified amount of time in milliseconds
  - Input: `milliseconds` (int, required) - Time to wait in milliseconds
  - Output: JSON with `success` (bool), `message` (str)
- `computer_get_screen_size`: Get the screen size and DPI scaling factor
  - Input: None
  - Output: JSON with `success` (bool), `message` (str), `width` (int), `height` (int), `dpiScalingFactor` (float)
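Because `computer_get_screen_size` reports a `dpiScalingFactor`, coordinates measured on a screenshot taken at physical resolution may need converting before being passed to the mouse tools. The exact semantics are platform-dependent, so treat this as an illustrative sketch of the arithmetic only:

```python
def to_logical(x_px: int, y_px: int, dpi_scale: float) -> tuple:
    """Convert physical pixel coordinates to logical coordinates
    by dividing out the DPI scaling factor."""
    return round(x_px / dpi_scale), round(y_px / dpi_scale)

to_logical(300, 150, 1.5)  # -> (200, 100)
```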
#### Use within an agent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

# Initialize the LLM
llm = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model=os.getenv("DASHSCOPE_MODEL", "qwen3-max")
)

# Create the agent using the create_agent API from LangChain v1.0
agent = create_agent(
    llm,
    tools=tools,
    context_schema=AgentContext,  # Context schema for passing session data
    system_prompt="""You are a helpful assistant that can control a desktop environment.
You can take screenshots and analyze them to identify interactive UI elements with text and their coordinates.
Use the tools provided to perform desktop automation tasks.
When asked to analyze UI elements, first take a screenshot, then use the OCR tool to analyze it.
Available tools:
1. computer_click_mouse - Click the mouse at specified coordinates with the specified button
2. computer_move_mouse - Move the mouse to specified coordinates
3. computer_drag_mouse - Drag the mouse from one point to another
4. computer_scroll - Scroll the mouse wheel at specified coordinates
5. computer_get_cursor_position - Get the current cursor position
6. computer_input_text - Input text into the active field
7. computer_press_keys - Press the specified keys (e.g., ['Ctrl', 'a'] or ['Enter'])
8. computer_screenshot - Take a screenshot of the current screen and save it to a file
9. computer_ocr_elements - Analyze a screenshot to identify all interactive UI elements with text and their coordinates
10. computer_vlm_analysis - Analyze an image using a vision language model with a custom prompt
11. computer_wait - Wait for a specified amount of time in milliseconds before continuing
12. computer_get_screen_size - Get the screen resolution (width and height) and DPI scaling factor"""
)

# Prepare the context with session data
session_data = SessionData()
session_data.computer_session = session
context = AgentContext(session_data=session_data)

# Example usage (translated from the original Chinese prompt)
example_query = """
Save screenshots to this folder: ./snapshots/computer/linux/
Use the appropriate tools to complete the following steps:
1. Double-click the Firefox browser icon with the left mouse button at coordinates (17, 61), wait 3 seconds, and save a screenshot to '0_linux_click_browser.png'
2. Use the computer_ocr_elements tool to find the center coordinates of the browser address bar (usually centered below the tab bar and above the page content, containing the text "Search or enter address"); double-click that point, type https://cn.bing.com/search?q=杭州天气, press Enter, wait 5 seconds for the navigation to finish, and save a screenshot to '1_linux_hangzhou_weather_search.png'
3. Use the computer_ocr_elements tool to find the click coordinates of the first search result, left-click it, wait 3 seconds, and save a screenshot to '2_linux_hangzhou_weather.png'
4. Use the computer_vlm_analysis tool to analyze the weather screenshot and summarize Hangzhou's weather for the next few days starting today; save a screenshot to '3_linux_hangzhou_weather_final.png'
"""
result = agent.invoke(
    {"messages": [{"role": "user", "content": example_query}]},
    context=context,
    config={"recursion_limit": 500}
)

# Extract and print the final output
if "messages" in result and len(result["messages"]) > 0:
    final_message = result["messages"][-1]
    if hasattr(final_message, 'content') and final_message.content:
        print(f"Result: {final_message.content}")
    else:
        print(f"Result: {final_message}")
else:
    print(f"Result: {result}")
```
### Code Operations (CodespaceIntegrationToolkit)

Designed for code execution and file operations in a cloud codespace.

#### Instantiation

Create an AgentBay session for codespace operations:

```python
import os
from dataclasses import dataclass

from agentbay import AgentBay, CreateSessionParams
from langchain_agentbay_integration.toolkits import CodespaceIntegrationToolkit
from langchain_agentbay_integration.tools import SessionData

# Create an AgentBay session for codespace operations
agent_bay = AgentBay()
params = CreateSessionParams(image_id="code_latest")
result = agent_bay.create(params)
session = result.session

# Initialize the toolkit for codespace operations
toolkit = CodespaceIntegrationToolkit()
tools = toolkit.get_tools()

# Define a context class for passing session data
@dataclass
class AgentContext:
    """You can add other fields as needed, but the session_data field is required."""
    session_data: SessionData = None  # Direct session object
```
#### Tools

The CodespaceIntegrationToolkit includes the following tools:

- `codespace_write_file`: Write content to a file in the codespace
  - Input: `path` (str, required) - Path to write the file to; `content` (str, required) - Content to write to the file; `mode` (str, optional, default="overwrite") - Write mode ('overwrite' or 'append')
  - Output: JSON with `success` (bool), `message` (str)
- `codespace_read_file`: Read content from a file in the codespace
  - Input: `path` (str, required) - Path of the file to read
  - Output: JSON with `success` (bool), `message` (str), `content` (str)
- `codespace_run_code`: Execute code in the codespace
  - Input: `code` (str, required) - The code to execute; `language` (str, required) - The programming language of the code. Supported languages: 'python', 'javascript'; `timeout_s` (int, optional, default=60) - Timeout for the code execution in seconds
  - Output: JSON with `success` (bool), `message` (str), `result` (str), `request_id` (str)
- `codespace_execute_command`: Execute a shell command in the codespace
  - Input: `command` (str, required) - Shell command to execute; `timeout_ms` (int, optional, default=1000) - Timeout for command execution in milliseconds
  - Output: JSON with `success` (bool), `message` (str), `output` (str)
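The distinction between `codespace_write_file`'s overwrite and append modes matters when building files incrementally: appended content is concatenated verbatim, so trailing newlines must be supplied by the caller. A tiny in-memory model of those semantics (illustrative only; the real tool writes into the cloud session):

```python
def write_file(store: dict, path: str, content: str, mode: str = "overwrite") -> None:
    """In-memory model of codespace_write_file's overwrite/append modes."""
    if mode == "append":
        store[path] = store.get(path, "") + content  # verbatim concatenation
    elif mode == "overwrite":
        store[path] = content
    else:
        raise ValueError(f"unknown mode: {mode}")

fs = {}
write_file(fs, "/tmp/demo.txt", "First line\n")                 # default overwrite
write_file(fs, "/tmp/demo.txt", "Second line\n", mode="append")
# fs["/tmp/demo.txt"] == "First line\nSecond line\n"
```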
#### Use within an agent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

# Initialize the LLM
llm = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model=os.getenv("DASHSCOPE_MODEL", "qwen3-max")
)

# Create the agent using the create_agent API from LangChain v1.0
agent = create_agent(
    llm,
    tools=tools,
    context_schema=AgentContext,  # Context schema for passing session data
    system_prompt="""You are a helpful assistant with access to AgentBay codespace tools that can automate code operations.
Available tools:
1. codespace_write_file - Write content to a file in the codespace
2. codespace_read_file - Read content from a file in the codespace
3. codespace_run_code - Execute code in the codespace
4. codespace_execute_command - Execute a shell command in the codespace
Use these tools to help the user accomplish their code automation tasks.
Example workflows:
1. To create and run a Python script: Write the script to a file, then run it with run_code
2. To check directory contents: Use execute_command with the 'ls' command
3. To read a file: Use the read_file tool
4. To create multiple files: Use the write_file tool multiple times
When using write_file, you can set the mode parameter to either overwrite (default) or append to a file.
When appending content, make sure to include newline characters if needed to separate lines."""
)

# Prepare the context with session data
session_data = SessionData()
session_data.codespace_session = session
context = AgentContext(session_data=session_data)

# Example usage
example_query = """Write a Python file '/tmp/script.py' with content 'print("Hello from Python!")
print("AgentBay integration successful!")
' using default mode.
Then run the Python code in that file using the run_code tool.
Next, write a file '/tmp/demo.txt' with content 'First line
' using default mode.
Then append a second line 'Second line
' to the same file using append mode.
After that, read the file '/tmp/demo.txt' to verify its content.
Finally, execute command 'cat /tmp/demo.txt' to show the file content."""
result = agent.invoke(
    {"messages": [{"role": "user", "content": example_query}]},
    context=context,
    config={"recursion_limit": 500}
)

# Extract and print the final output
if "messages" in result and len(result["messages"]) > 0:
    final_message = result["messages"][-1]
    if hasattr(final_message, 'content') and final_message.content:
        print(f"Result: {final_message.content}")
    else:
        print(f"Result: {final_message}")
else:
    print(f"Result: {result}")
```
### Browser Automation (BrowserIntegrationToolkit)

Designed for web browser automation tasks.

#### Instantiation

Create an AgentBay session for browser operations:

```python
import os
from dataclasses import dataclass

from agentbay import AgentBay, CreateSessionParams
from langchain_agentbay_integration.toolkits import BrowserIntegrationToolkit
from langchain_agentbay_integration.tools import SessionData

# Create an AgentBay session for browser operations
agent_bay = AgentBay()
params = CreateSessionParams(image_id="browser_latest")
result = agent_bay.create(params)
session = result.session

# Initialize the toolkit and bind the browser to the session
toolkit = BrowserIntegrationToolkit()
toolkit.initialize_browser(session)
tools = toolkit.get_tools()

# Define a context class for passing session data
@dataclass
class AgentContext:
    """You can add other fields as needed, but the session_data field is required."""
    session_data: SessionData = None  # Direct session object
```
#### Tools

The BrowserIntegrationToolkit includes the following tools:

- `browser_navigate`: Navigate to a URL in the browser
  - Input: `url` (str, required) - URL to navigate to
  - Output: JSON with `success` (bool), `message` (str), `url` (str)
- `browser_act`: Perform an action on the current browser page
  - Input: `action` (str, required) - Action to perform on the page
  - Output: JSON with `success` (bool), `message` (str), `action` (str)
- `browser_screenshot`: Take a screenshot of the current browser page
  - Input: `file_path` (str, required) - File path to save the screenshot
  - Output: JSON with `success` (bool), `message` (str), `file_path` (str)
#### Use within an agent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

# Initialize the LLM
llm = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    model=os.getenv("DASHSCOPE_MODEL", "qwen3-max")
)

# Create the agent using the create_agent API from LangChain v1.0
agent = create_agent(
    llm,
    tools=tools,
    context_schema=AgentContext,  # Context schema for passing session data
    system_prompt="""You are a helpful assistant with access to AgentBay browser tools that can automate web browsing operations.
Available tools:
1. browser_navigate - Navigate to a URL in the browser
2. browser_act - Perform an action on the current browser page
3. browser_screenshot - Take a screenshot of the current browser page
Use these tools to help the user accomplish their web browsing automation tasks.
Example workflows:
1. To visit a website and take a screenshot: Navigate to the URL, then use the screenshot tool
2. To interact with a webpage: Navigate to the URL, then use the act tool to perform actions
3. To search on a website: Navigate to the site, use act to fill the search box and submit
When using browser_act, you can perform various actions such as:
- Clicking elements: "Click on the button with text 'Submit'"
- Filling forms: "Fill the input field with label 'Username' with 'john_doe'"
- Selecting options: "Select 'Option 1' from the dropdown with label 'Category'"
- Scrolling: "Scroll down by 300 pixels"
- Waiting: "Wait for 2 seconds"
Always try to be specific about what element you want to interact with, using text, labels, or other identifying features."""
)

# Prepare the context with session data
session_data = SessionData()
session_data.browser_session = session
context = AgentContext(session_data=session_data)

# Example usage (translated from the original Chinese prompt)
example_query = """
Task objective:
This is an automated SEO verification flow for Wuying AgentBay. The verification passes only if the URL of the first search result page is https://www.aliyun.com/product/agentbay.html; otherwise it fails.
Task steps:
Save all screenshots below under ./snapshots/browser/
Navigate to https://www.baidu.com/.
Then take a screenshot named '0_baidu_first_page.png'.
Then type '无影AgentBay官网' into the search box and click the '百度一下' (Baidu Search) button.
Then take a screenshot named '1_baidu_search_result_page.png'.
Then click the first search result and extract the content of the page it opens.
Finally, take a screenshot named '2_first_search_item.png'.
Task output:
Return JSON shaped like:
{
    "success": true|false,
    "message": "reason the verification passed or failed"
}
"""
result = agent.invoke(
    {"messages": [{"role": "user", "content": example_query}]},
    context=context,
    config={"recursion_limit": 500}
)

# Extract and print the final output
if "messages" in result and len(result["messages"]) > 0:
    final_message = result["messages"][-1]
    if hasattr(final_message, 'content') and final_message.content:
        print(f"Result: {final_message.content}")
    else:
        print(f"Result: {final_message}")
else:
    print(f"Result: {result}")
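The example above asks the agent to answer with a JSON object, but model replies often wrap JSON in prose or code fences. A tolerant extraction helper (hypothetical; not part of the package) can make the verdict machine-readable:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first {...} JSON object out of a model reply."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

reply = 'Verification complete. {"success": true, "message": "first result matched"}'
verdict = extract_json(reply)  # -> {'success': True, 'message': 'first result matched'}
```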
## File details

### Source distribution: langchain_agentbay_integration-0.2.2.tar.gz

- Size: 68.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.12.0 Darwin/24.6.0

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7c80ea8959f53ad11ae80bc6e086365838794e50533593a7fe914b3f0328df50` |
| MD5 | `dad52520a09166026242d7f99cf4b3ab` |
| BLAKE2b-256 | `f03750ef471dc04e19e950b102ba7b6bf201ec77079635e7c0a008c1b09cca86` |
### Built distribution: langchain_agentbay_integration-0.2.2-py3-none-any.whl

- Size: 67.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.2.1 CPython/3.12.0 Darwin/24.6.0

| Algorithm | Hash digest |
|---|---|
| SHA256 | `26a31741ec132a159e9e14cede2a430ad87080ca5bfcb3e8472d4b471732212b` |
| MD5 | `512a2074a843d24c0964039e93f3120e` |
| BLAKE2b-256 | `2841d5b296f0b957405771dc827575ecd0d46acfe6ebde7bcdb62c8301ba18a3` |