A multi-agent ecosystem for large language models (LLMs) and autonomous systems.


๐Ÿ LangSwarm

Multi-Agent AI Orchestration Framework

Build intelligent systems where multiple AI agents collaborate to solve complex tasks. LangSwarm makes it easy to create, orchestrate, and scale AI agent workflows with support for all major LLM providers and a rich ecosystem of tools.

License: MIT · Python 3.8+


🌟 Key Features (v0.1.0)

  • 🤖 Multi-Agent Governance: Built-in approval workflows (ApprovalQueue) for human-in-the-loop control.
  • 📅 Autonomous Scheduling: Schedule recurring tasks and reliable background jobs (JobManager).
  • 🧠 MemoryPro: Advanced hybrid memory with priority tiers and automatic fading.
  • 💰 Token Budgeting: Real-time cost estimation and strict budget enforcement.
  • 🔌 Unified Provider: One interface for OpenAI, Anthropic, Gemini, Mistral, and local models.

🎯 What is LangSwarm?

LangSwarm is a framework for multi-agent AI orchestration. Unlike simple chatbot libraries, LangSwarm enables you to:

  • Orchestrate multiple specialized agents working together on complex tasks
  • Build workflows where agents collaborate, hand off work, and combine their outputs
  • Integrate tools through the Model Context Protocol (MCP) for real-world capabilities
  • Support any LLM provider (OpenAI, Anthropic, Google, Mistral, local models, and more)
  • Scale from prototypes to production with enterprise-grade memory, observability, and deployment options

Why Multi-Agent?

Single AI agents hit limits quickly. Multi-agent systems unlock:

  • Specialization: Each agent excels at specific tasks (research, writing, analysis, coding)
  • Collaboration: Agents work together, combining strengths and compensating for weaknesses
  • Scalability: Distribute workload across multiple agents and providers
  • Reliability: Redundancy and validation through multiple perspectives
  • Modularity: Build, test, and deploy agents independently
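The hand-off idea behind a sequential multi-agent pipeline can be sketched in plain Python, with each "agent" reduced to a stub function. This is an illustrative sketch of the concept, not the LangSwarm API:

```python
# Toy sketch of sequential agent hand-off: each "agent" is a function
# that transforms the running result. Illustrative only.

def researcher(task: str) -> str:
    # A real agent would call an LLM here.
    return f"notes on: {task}"

def writer(notes: str) -> str:
    return f"article based on {notes}"

def run_chain(task, agents):
    """Pass each agent's output to the next agent's input."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

print(run_chain("AI agents", [researcher, writer]))
# -> article based on notes on: AI agents
```

Each stage stays independently testable, which is exactly the modularity benefit listed above.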

โšก Quick Start

Installation

pip install langswarm openai
export OPENAI_API_KEY="your-api-key-here"

Simple Agent (30 seconds)

import asyncio
from langswarm import create_agent

async def main():
    # Create an agent
    agent = create_agent(model="gpt-3.5-turbo")
    
    # Chat with it
    response = await agent.chat("What's the capital of France?")
    print(response)

asyncio.run(main())

Multi-Agent Orchestration (Real Power)

from langswarm import create_agent
from langswarm.core.agents import register_agent
from langswarm.core.workflows import create_simple_workflow, get_workflow_engine

# Create specialized agents
researcher = create_agent(
    name="researcher",
    model="gpt-4",
    system_prompt="You are a research specialist. Gather comprehensive information."
)

writer = create_agent(
    name="writer",
    model="gpt-4",
    system_prompt="You are a writing specialist. Create clear, engaging content."
)

# Register for orchestration
register_agent(researcher)
register_agent(writer)

# Create workflow: researcher → writer
workflow = create_simple_workflow(
    workflow_id="content_creation",
    name="Research and Write",
    agent_chain=["researcher", "writer"]
)

# Execute orchestrated workflow (run inside an async function / event loop)
engine = get_workflow_engine()
result = await engine.execute_workflow(
    workflow=workflow,
    input_data={"input": "Write an article about AI agents"}
)

print(result.output)  # Final result from both agents working together

🧠 Core Concepts

1. Agents

Agents are AI-powered entities with specific roles and capabilities. LangSwarm supports:

  • Multiple providers: OpenAI, Anthropic (Claude), Google (Gemini), Mistral, Cohere, local models
  • Flexible configuration: System prompts, temperature, tools, memory
  • Built-in capabilities: Streaming, structured outputs, cost tracking

# Simple agent creation
agent = create_agent(model="gpt-4", memory=True)

# Advanced agent with tools
agent = create_agent(
    name="assistant",
    model="gpt-4",
    system_prompt="You are a helpful assistant",
    tools=["filesystem", "web_search"]
)

2. Workflows

Workflows define how agents collaborate:

  • Sequential: Agent A → Agent B → Agent C
  • Parallel: Multiple agents work simultaneously
  • Conditional: Route based on results or criteria
  • Nested: Complex multi-stage pipelines

# Simple sequential workflow
workflow = create_simple_workflow("task", "My Task", ["agent1", "agent2"])

# Execute
engine = get_workflow_engine()
result = await engine.execute_workflow(workflow, {"input": "task data"})
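The parallel pattern from the list above can be sketched with plain asyncio and stub agents. The agent functions here are illustrative stand-ins, not the LangSwarm API:

```python
import asyncio

# Toy sketch of a parallel workflow step: two stub agents run
# concurrently on the same input and their outputs are collected.

async def summarizer(text: str) -> str:
    await asyncio.sleep(0)          # stands in for an LLM call
    return f"summary({text})"

async def fact_checker(text: str) -> str:
    await asyncio.sleep(0)
    return f"checked({text})"

async def parallel_step(text: str) -> list:
    # Both agents receive the same input and run simultaneously.
    return await asyncio.gather(summarizer(text), fact_checker(text))

results = asyncio.run(parallel_step("draft"))
print(results)  # ['summary(draft)', 'checked(draft)']
```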

3. Tools (MCP)

LangSwarm implements the Model Context Protocol (MCP) for tool integration:

Built-in Tools:

  • filesystem - File operations (read, write, list)
  • web_search - Web search capabilities
  • github - GitHub repository operations
  • sql_database - SQL database access
  • bigquery_vector_search - Semantic search in BigQuery
  • codebase_indexer - Code analysis and understanding
  • workflow_executor - Dynamic workflow execution
  • tasklist - Task management
  • message_queue - Pub/sub message handling

# Agent with tools
agent = create_agent(
    model="gpt-4",
    tools=["filesystem", "web_search"]
)

# Tools are automatically injected
response = await agent.chat("Find the latest Python news and save it to a file")
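The "tools are automatically injected" step amounts to routing a model-issued tool call to a registered handler. A minimal sketch of that dispatch pattern, with purely illustrative names (not the MCP or LangSwarm API):

```python
# Toy sketch of tool registration and dispatch: the model emits a
# structured tool call, and the registry routes it to the handler.

TOOLS = {}

def register(name):
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register("filesystem")
def filesystem(operation: str, path: str) -> dict:
    # A real tool would touch the filesystem; this stub just echoes.
    return {"ok": True, "op": operation, "path": path}

def dispatch(call: dict) -> dict:
    """Look up the named tool and invoke it with the call arguments."""
    tool = TOOLS.get(call["tool"])
    if tool is None:
        return {"ok": False, "error": f"unknown tool: {call['tool']}"}
    return tool(**call["args"])

print(dispatch({"tool": "filesystem",
                "args": {"operation": "read", "path": "/tmp/x"}}))
```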

4. Memory

Conversation history and context management with multiple backends:

  • SQLite: Zero-config, local development
  • Redis: Fast, distributed caching
  • ChromaDB: Vector embeddings and semantic search
  • BigQuery: Analytics-ready, enterprise scale
  • Elasticsearch: Full-text search and analytics
  • Qdrant: High-performance vector search
  • Pinecone: Managed vector database

# Simple memory (in-memory, no persistence)
agent = create_agent(model="gpt-4", memory=True)

# Persistent memory with SQLite
from langswarm.core.memory import create_memory_manager

memory = create_memory_manager(
    backend="sqlite",
    db_path="./conversations.db"
)

agent = create_agent(
    model="gpt-4",
    memory=True,
    memory_manager=memory
)
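The idea behind a persistent backend like the SQLite option can be sketched with the standard library alone. The schema and class name here are illustrative, not LangSwarm's internal storage format:

```python
import sqlite3

# Minimal sketch of a SQLite-backed conversation memory: messages are
# appended per session and replayed as history.

class ConversationStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages "
            "(session TEXT, role TEXT, content TEXT)"
        )

    def add(self, session, role, content):
        self.db.execute("INSERT INTO messages VALUES (?, ?, ?)",
                        (session, role, content))
        self.db.commit()

    def history(self, session):
        rows = self.db.execute(
            "SELECT role, content FROM messages WHERE session = ?",
            (session,)).fetchall()
        return [{"role": r, "content": c} for r, c in rows]

store = ConversationStore()
store.add("s1", "user", "What's the capital of France?")
store.add("s1", "assistant", "Paris.")
print(store.history("s1"))
```

Swapping `":memory:"` for a file path gives the zero-config persistence described above.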

๐Ÿ—๏ธ Architecture

┌─────────────────────────────────────────────────────────┐
│                   LangSwarm Framework                   │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │    Agents    │  │  Workflows   │  │    Tools     │   │
│  │              │  │              │  │              │   │
│  │ • OpenAI     │  │ • Sequential │  │ • MCP Local  │   │
│  │ • Anthropic  │  │ • Parallel   │  │ • MCP Remote │   │
│  │ • Google     │  │ • Conditional│  │ • Built-in   │   │
│  │ • Mistral    │  │ • Nested     │  │ • Custom     │   │
│  │ • Local      │  │              │  │              │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
│                                                         │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌─────────────────────────────────────────────────┐    │
│  │              Infrastructure Layer               │    │
│  │                                                 │    │
│  │  Memory       Session       Observability       │    │
│  │  • SQLite     • Storage     • OpenTelemetry     │    │
│  │  • Redis      • Providers   • Tracing           │    │
│  │  • ChromaDB   • Lifecycle   • Metrics           │    │
│  │  • BigQuery   • Management                      │    │
│  └─────────────────────────────────────────────────┘    │
│                                                         │
└─────────────────────────────────────────────────────────┘

📚 Use Cases

Content Creation Pipeline

from langswarm import create_agent
from langswarm.core.agents import register_agent
from langswarm.core.workflows import create_simple_workflow, get_workflow_engine

# Specialized agents
researcher = create_agent(
    name="researcher",
    model="gpt-4",
    system_prompt="Research topics thoroughly"
)

writer = create_agent(
    name="writer",
    model="gpt-4",
    system_prompt="Write engaging content"
)

editor = create_agent(
    name="editor",
    model="gpt-4",
    system_prompt="Edit and polish"
)

# Register all
for agent in [researcher, writer, editor]:
    register_agent(agent)

# Workflow: research → write → edit
workflow = create_simple_workflow(
    workflow_id="content",
    name="Content Pipeline",
    agent_chain=["researcher", "writer", "editor"]
)

# Execute (run inside an async function / event loop)
result = await get_workflow_engine().execute_workflow(
    workflow, {"input": "AI in Healthcare"}
)

Code Analysis & Documentation

# Agent with code analysis tools
coder = create_agent(
    model="gpt-4",
    tools=["codebase_indexer", "filesystem", "github"]
)

# Analyze and document
result = await coder.chat(
    "Analyze the repository, find all API endpoints, and create documentation"
)

Customer Support System

# Multiple agents for different tasks
classifier = create_agent(system_prompt="Classify customer inquiries")
support = create_agent(system_prompt="Provide support answers", tools=["bigquery_vector_search"])
escalation = create_agent(system_prompt="Handle escalations")

# Conditional workflow based on classification
# (See docs for advanced workflow patterns)
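The conditional routing mentioned above boils down to: the classifier's label selects which downstream agent handles the inquiry. A minimal sketch with stub handlers in place of real agents (illustrative names, not the LangSwarm workflow API):

```python
# Toy sketch of conditional routing in a support pipeline: a
# classifier label picks the downstream handler.

def classify(inquiry: str) -> str:
    # A real classifier agent would call an LLM; here, a keyword rule.
    return "escalation" if "refund" in inquiry.lower() else "support"

HANDLERS = {
    "support": lambda q: f"support answer for: {q}",
    "escalation": lambda q: f"escalated: {q}",
}

def route(inquiry: str) -> str:
    return HANDLERS[classify(inquiry)](inquiry)

print(route("How do I reset my password?"))  # support path
print(route("I want a refund now"))          # escalation path
```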

🔧 Configuration

LangSwarm uses code-first configuration for maximum flexibility and type safety. Configure everything programmatically in Python:

Simple Configuration

from langswarm import create_agent

# Quick start - minimal config
agent = create_agent(model="gpt-4")

# Common configuration options
agent = create_agent(
    name="assistant",
    model="gpt-4",
    system_prompt="You are a helpful assistant",
    memory=True,
    tools=["filesystem", "web_search"],
    temperature=0.7,
    max_tokens=2000,
    stream=False
)

Advanced Configuration with Builder Pattern

from langswarm.core.agents import AgentBuilder

# Full control with builder pattern (Unified Provider)
agent = await (
    AgentBuilder()
    .name("advanced-assistant")
    .litellm()  # Unified provider (supports OpenAI, Anthropic, etc.)
    .model("gpt-4")
    .system_prompt("You are a helpful assistant")
    .tools(["filesystem", "web_search", "github"])
    .memory_enabled(True)
    .streaming(True)
    .temperature(0.7)
    .max_tokens(4000)
    .timeout(60)
    .build()
)

Automatic Observability with LangFuse

LangSwarm automatically enables LangFuse tracing and prompt management when environment variables are set:

# Set these environment variables
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"  # Optional
export OBSERVABILITY_DISABLE_TRACING="true" # Optional: Disable tracing but keep client active (e.g. for prompts)

Zero configuration needed! Just set the env vars and all LiteLLM calls are automatically traced:

# LangFuse is automatically enabled!
agent = await (
    AgentBuilder()
    .litellm()
    .model("gpt-4")
    .build()
)

# All interactions are now traced in LangFuse
response = await agent.chat("Hello!")

Manual override (if you need explicit configuration):

# Explicitly configure LangFuse (overrides environment variables)
agent = await (
    AgentBuilder()
    .litellm()
    .model("gpt-4")
    .observability(
        provider="langfuse",
        public_key="pk-lf-...",
        secret_key="sk-lf-...",
        host="https://cloud.langfuse.com"
    )
    .build()
)

What you get with LangFuse:

  • 📊 Full trace of all LLM calls with timing and costs
  • 🎯 Prompt versioning and management
  • 💰 Automatic cost tracking per trace
  • 📈 Performance monitoring (latency, tokens, errors)
  • 👥 User analytics and session tracking
  • 🐛 Complete debugging with conversation history

Installation:

pip install langswarm[observability]
# or separately
pip install langfuse

Provider-Specific Configuration

# OpenAI
agent = create_agent(
    model="gpt-4",
    api_key="your-key-here",  # or use OPENAI_API_KEY env var
    temperature=0.7
)

# Anthropic (Claude)
from langswarm.core.agents import AgentBuilder

agent = await (
    AgentBuilder()
    .anthropic(api_key="your-key-here")  # or use ANTHROPIC_API_KEY
    .model("claude-3-5-sonnet-20241022")
    .build()
)

# Google (Gemini)
agent = await (
    AgentBuilder()
    .gemini(api_key="your-key-here")  # or use GOOGLE_API_KEY
    .model("gemini-pro")
    .build()
)

Memory Configuration

# Simple in-memory (default)
agent = create_agent(model="gpt-4", memory=True)

# Advanced memory with custom settings
from langswarm.core.memory import create_memory_manager

memory_manager = create_memory_manager(
    backend="sqlite",
    db_path="./conversations.db"
)

agent = create_agent(
    model="gpt-4",
    memory=True,
    memory_manager=memory_manager
)

🚀 Advanced Features

Streaming Responses

agent = create_agent(model="gpt-4")

async for chunk in agent.chat_stream("Tell me a story"):
    print(chunk, end="", flush=True)
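The consumption pattern behind `chat_stream` is an async generator yielding chunks as they arrive. The sketch below uses a stub generator in place of the real client to show the same `async for` loop:

```python
import asyncio

# Sketch of the streaming pattern: an async generator yields chunks,
# and the consumer handles each one as it arrives. Stub only.

async def fake_stream(text: str):
    for word in text.split():
        await asyncio.sleep(0)   # stands in for network latency
        yield word + " "

async def consume():
    chunks = []
    async for chunk in fake_stream("Once upon a time"):
        chunks.append(chunk)     # a UI would print(chunk, end="")
    return "".join(chunks)

result = asyncio.run(consume())
print(result)
```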

Cost Tracking

agent = create_agent(model="gpt-4", track_costs=True)

await agent.chat("Hello!")

stats = agent.get_usage_stats()
print(f"Tokens used: {stats['total_tokens']}")
print(f"Estimated cost: ${stats['estimated_cost']}")

Structured Outputs

from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int
    email: str

agent = create_agent(model="gpt-4")
result = await agent.chat(
    "Extract: John Doe, 30 years old, john@example.com",
    response_format=UserInfo
)
# result is a UserInfo instance
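Conceptually, `response_format` asks the model for JSON and then parses and validates it into a typed object. A self-contained sketch of that step using a stdlib dataclass instead of Pydantic:

```python
import json
from dataclasses import dataclass

# Sketch of structured-output validation: parse the model's JSON text
# into a typed object, failing loudly on missing fields.

@dataclass
class UserInfo:
    name: str
    age: int
    email: str

def parse_user(raw: str) -> UserInfo:
    data = json.loads(raw)      # model output, as JSON text
    return UserInfo(**data)     # raises TypeError on missing keys

llm_output = '{"name": "John Doe", "age": 30, "email": "john@example.com"}'
user = parse_user(llm_output)
print(user.name, user.age)  # John Doe 30
```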

Observability (OpenTelemetry)

from langswarm.observability import enable_instrumentation

# Enable tracing
enable_instrumentation(
    service_name="my-agents",
    exporter="jaeger",  # or "otlp", "prometheus"
    endpoint="http://localhost:14268/api/traces"
)

# All agent/workflow operations now traced

๐Ÿ› ๏ธ MCP Tool Development

Create custom tools using the Model Context Protocol:

from langswarm.tools import UnifiedTool
from langswarm.core.errors import ErrorContext

class MyCustomTool(UnifiedTool):
    """Custom tool for specific operations"""
    
    metadata = {
        "name": "My Custom Tool",
        "description": "Does something specific",
        "version": "1.0.0"
    }
    
    async def execute(self, input_data: dict, context: ErrorContext = None) -> dict:
        """Main execution method"""
        operation = input_data.get("operation")
        
        if operation == "do_something":
            result = await self._do_something(input_data)
            return {"success": True, "result": result}
        else:
            return {"success": False, "error": f"Unknown operation: {operation}"}
    
    async def _do_something(self, data: dict):
        # Your tool logic here
        return {"message": "Operation completed"}

# Register and use
from langswarm.tools import ToolRegistry

registry = ToolRegistry()
registry.register_tool(MyCustomTool())

# Now available to agents
agent = create_agent(model="gpt-4", tools=["my_custom_tool"])

📖 Documentation

Full documentation, covering main resources, core features, and developer guides, is available in the project repository.

🎯 Production Deployment

Docker

FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
CMD ["python", "app.py"]

Environment Variables

# Required
export OPENAI_API_KEY="sk-..."

# Optional providers
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."

# Observability & Tracing
export LANGFUSE_PUBLIC_KEY="pk-lf-..."      # Auto-enables LangFuse tracing
export LANGFUSE_SECRET_KEY="sk-lf-..."     # Required with public key
export LANGFUSE_HOST="https://cloud.langfuse.com"  # Optional
export LANGSMITH_API_KEY="..."             # Alternative observability
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"

# Memory backends
export REDIS_URL="redis://localhost:6379"
export BIGQUERY_PROJECT="my-project"
export CHROMADB_PATH="./data/chromadb"

Cloud Deployment

LangSwarm supports deployment to:

  • Google Cloud Platform (Cloud Run, Cloud Functions, GKE)
  • AWS (Lambda, ECS, EKS)
  • Azure (Functions, Container Apps, AKS)

See deployment documentation for platform-specific guides.


🧪 Testing

# Install dev dependencies
pip install -e .[dev]

# Run tests
pytest tests/

# Run specific test suite
pytest tests/unit/
pytest tests/integration/
pytest tests/e2e/

# Run examples
cd examples/simple
python 01_basic_chat.py

๐Ÿค Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

# Clone repository
git clone https://github.com/aekdahl/langswarm.git
cd langswarm

# Install in development mode
pip install -e .[dev]

# Run tests
pytest

# Run examples
cd examples/simple && python test_all_examples.py

📊 Supported Providers

| Provider | Status | Models | Notes |
|----------|--------|--------|-------|
| Unified (LiteLLM) | ✅ Stable | All providers | Universal adapter, failover, observability |
| OpenAI | ⚠️ Legacy | GPT-4, GPT-3.5, etc. | Use Unified provider |
| Anthropic | ⚠️ Legacy | Claude 3.5, Claude 3 | Use Unified provider |
| Google | ⚠️ Legacy | Gemini Pro, Gemini Pro Vision | Use Unified provider |
| Mistral | ⚠️ Legacy | Mixtral, Mistral Large | Use Unified provider |
| Cohere | ⚠️ Legacy | Command R+, Command R | Use Unified provider |
| Hugging Face | ⚠️ Legacy | Open source models | Use Unified provider |
| Local | ⚠️ Legacy | Ollama, LocalAI, etc. | Use Unified provider |
| Custom | ✅ Beta | Any OpenAI-compatible API | Community template |

๐Ÿ› ๏ธ Built-in MCP Tools

| Tool | Description | Status |
|------|-------------|--------|
| filesystem | File operations (read, write, list) | ✅ Stable |
| web_search | Web search capabilities | ✅ Stable |
| github | GitHub repository operations | ✅ Stable |
| sql_database | SQL database access | ✅ Stable |
| bigquery_vector_search | Semantic search in BigQuery | ✅ Stable |
| codebase_indexer | Code analysis and search | ✅ Stable |
| workflow_executor | Dynamic workflow execution | ✅ Stable |
| tasklist | Task management | ✅ Stable |
| message_queue_publisher | Publish to message queues | ✅ Stable |
| message_queue_consumer | Consume from message queues | ✅ Stable |
| realtime_voice | OpenAI Realtime API integration | ✅ Beta |
| daytona_environment | Dev environment management | ✅ Beta |
| gcp_environment | GCP resource management | ✅ Beta |
| dynamic_forms | Dynamic form generation | ✅ Beta |

๐Ÿ“ License

LangSwarm is MIT licensed. See LICENSE for details.



🎉 Examples

See the examples/simple/ directory for 10 working examples:

  1. Basic Chat - Simple agent conversation
  2. Memory Chat - Agent with conversation memory
  3. Two Agents - Multiple agents working together
  4. Different Models - Using different LLM providers
  5. With Tools - Agents using tools (filesystem, web search)
  6. Workflow - Sequential agent workflows
  7. Web Search - Agent with web search capabilities
  8. Streaming Response - Real-time streaming responses
  9. Cost Tracking - Tracking token usage and costs
  10. Advanced Configuration - Full builder pattern examples

Each example is 10-30 lines of code and fully working.



Built with ❤️ by the LangSwarm community
