
GiantKelp AI

Universal AI Agent supporting multiple LLM providers with a single, unified interface

Python 3.8+ · License: MIT

Built by GiantKelp - AI Agency in London


Overview

GiantKelp AI is a powerful, provider-agnostic Python library that gives you a unified interface to interact with multiple leading LLM providers. Write your code once and switch between providers seamlessly - no need to learn different APIs or refactor your codebase.

Why GiantKelp AI?

  • 🔄 Provider Flexibility: Switch between Anthropic, OpenAI, Gemini, Groq, and DeepSeek without changing your code
  • 🎯 Smart Model Selection: Automatically use smart, fast, or reasoning models based on your needs
  • 📄 Rich Media Support: Handle text, images, and documents (PDFs) with the same simple interface
  • 🌐 Web Search Integration: Native web search capabilities where supported
  • 🤖 Agent Teams: Build sophisticated multi-agent systems with handoffs (optional)
  • ⚡ Streaming Support: Real-time response streaming across all providers
  • 📊 Usage Tracking: Optional Redis stream integration for token usage monitoring
  • 🛡️ Production Ready: Comprehensive error handling, logging, and type hints

Supported Providers

| Provider | Text | Vision | Documents | Web Search | Reasoning |
|---|:---:|:---:|:---:|:---:|:---:|
| Anthropic (Claude) | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Google Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq | ✅ | ❌ | ❌ | ❌ | ✅ |
| DeepSeek | ✅ | ❌ | ❌ | ❌ | ✅ |

Installation

Basic Installation

pip install giantkelp-ai

With Agent Support

pip install giantkelp-ai[agents]

With Redis Usage Tracking

pip install giantkelp-ai redis

Quick Start

from giantkelp_ai import AIAgent

# Initialize with your preferred provider
agent = AIAgent(provider="anthropic")

# Get a response
response = agent.fast_completion("What is the capital of France?")
print(response)  # "Paris is the capital of France."

With Agent Naming (for usage tracking)

# Name your agent for usage tracking and analytics
agent = AIAgent(provider="anthropic", agent_name="customer_support")

response = agent.smart_completion("Help me with my order")

Configuration

Environment Variables

Set your API keys as environment variables:

export ANTHROPIC_API_KEY="your-anthropic-key"
export OPENAI_API_KEY="your-openai-key"
export GEMINI_API_KEY="your-gemini-key"
export GROQ_API_KEY="your-groq-key"
export DEEPSEEK_API_KEY="your-deepseek-key"

# Optional global settings
export MAX_TOKENS=5000
export TEMPERATURE=0.1

Using .env File

Create a .env file in your project root:

ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key
DEEPSEEK_API_KEY=your-deepseek-key

MAX_TOKENS=5000
TEMPERATURE=0.1
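The variables above are plain KEY=VALUE pairs. If your entry point doesn't load the file automatically, python-dotenv is the usual tool; a minimal stdlib loader is easy to sketch (illustrative only — no quoting or export-statement support):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines into os.environ.

    Illustrative sketch: skips blank lines and comments, and does not
    handle quoting or export syntax. python-dotenv is the usual
    production choice.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so real environment variables take precedence
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
```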

Core Features

1. Text Completions

Choose from three model tiers for different use cases:

Fast Completion (Optimized for Speed)

agent = AIAgent(provider="anthropic")

response = agent.fast_completion(
    user_prompt="Translate 'hello' to Spanish",
    system_prompt="You are a helpful translator",
    max_tokens=100,
    temperature=0.1
)
print(response)  # "Hola"

Smart Completion (Balanced Performance)

response = agent.smart_completion(
    user_prompt="Explain quantum entanglement",
    system_prompt="You are a physics professor",
    max_tokens=500,
    temperature=0.7
)

Reasoning Completion (Advanced Problem Solving)

response = agent.reasoning_completion(
    user_prompt="Solve this complex math problem: ...",
    max_tokens=2000
)

2. Streaming Responses

Get real-time responses as they're generated:

stream = agent.fast_completion(
    user_prompt="Write a short story about a robot",
    stream=True
)

for chunk in agent.normalize_stream(stream):
    print(chunk, end="", flush=True)

3. JSON Output Mode

Request structured JSON responses:

response = agent.fast_completion(
    user_prompt="List 5 fruits with their colors",
    json_output=True
)

print(response)
# {
#     "fruits": [
#         {"name": "apple", "color": "red"},
#         {"name": "banana", "color": "yellow"},
#         ...
#     ]
# }

4. Image Analysis

Analyze images with vision-capable models:

# From file path
response = agent.image_completion(
    user_prompt="What objects are in this image?",
    image="path/to/image.jpg",
    file_path=True
)

# From base64 data
response = agent.image_completion(
    user_prompt="Describe this image",
    image=base64_image_data,
    file_path=False
)

# Use smart model for complex analysis
response = agent.image_completion(
    user_prompt="Analyze the composition and artistic style",
    image="artwork.jpg",
    smart_model=True
)

5. Document Processing

Process PDF documents with automatic text extraction:

# Single document processing
response = agent.document_completion(
    user_prompt="Summarize this document",
    document="report.pdf",
    smart_model=True
)

# Process each page independently
results = agent.document_completion(
    user_prompt="Extract key points from each page",
    document="multi-page-report.pdf",
    split_into_pages=True
)

# results is a dict: {1: "Page 1 summary", 2: "Page 2 summary", ...}
for page_num, summary in results.items():
    print(f"Page {page_num}: {summary}")

6. Web Search

Perform real-time web searches (provider-dependent):

# Basic web search
response = agent.web_search(
    query="Latest developments in AI 2025",
    scope="smart"
)

# With system prompt
response = agent.web_search(
    query="Best practices for Python async programming",
    system="You are a senior Python developer",
    scope="fast"
)

# With location-based search
response = agent.web_search(
    query="Local restaurants",
    country_code="GB",
    city="London",
    scope="fast"
)

# With reasoning model
response = agent.web_search(
    query="Compare the economic impacts of renewable energy",
    scope="reasoning",
    thinking_budget=5000  # Anthropic only
)

Advanced Features

Agent Teams with Handoffs

Build sophisticated multi-agent systems that can delegate tasks to specialized agents:

agent = AIAgent(provider="anthropic")

# Create a team of specialized agents
agent.create_handoff_team([
    {
        "name": "triage",
        "instructions": "You are a customer service triage agent. Route inquiries to the appropriate specialist.",
        "type": "smart",
        "handoffs_to": ["billing", "technical", "sales"]
    },
    {
        "name": "billing",
        "instructions": "You handle all billing and payment-related questions. Be clear and concise.",
        "type": "fast",
        "handoffs_to": ["escalation"]
    },
    {
        "name": "technical",
        "instructions": "You provide technical support and troubleshooting. Be detailed and helpful.",
        "type": "fast",
        "handoffs_to": ["escalation"]
    },
    {
        "name": "sales",
        "instructions": "You handle sales inquiries and product questions. Be persuasive and informative.",
        "type": "fast"
    },
    {
        "name": "escalation",
        "instructions": "You handle complex issues requiring deep reasoning and nuanced judgment.",
        "type": "reasoning"
    }
])

# Run an agent
response = agent.run_agent(
    user_prompt="I'm having trouble with my last payment",
    agent_name="triage"
)

# The triage agent will automatically hand off to billing if needed
print(response)

Creating Individual Agents

# Create a single agent
support_agent = agent.create_agent_sdk_agent(
    name="support",
    instructions="You are a friendly customer support agent.",
    agent_type="smart",
    store=True
)

# Create agent with custom tools
from my_tools import calculator, database_query

analyst_agent = agent.create_agent_sdk_agent(
    name="analyst",
    instructions="You analyze data and provide insights.",
    agent_type="reasoning",
    tools=[calculator, database_query]
)

# List all agents
print(agent.list_agents())  # ['support', 'analyst']

# Get a specific agent
my_agent = agent.get_agent("support")

Async Agent Execution

import asyncio

async def main():
    agent = AIAgent(provider="anthropic")

    # Create agent
    agent.create_agent_sdk_agent(
        name="assistant",
        instructions="You are a helpful assistant."
    )

    # Run asynchronously
    response_coro = agent.run_agent(
        user_prompt="What's the weather like?",
        agent_name="assistant",
        async_mode=True
    )

    response = await response_coro
    print(response)

asyncio.run(main())

Usage Tracking with Redis

Track token usage across all your AI agents by sending usage events to a Redis stream. This is useful for monitoring costs, analyzing usage patterns, and billing.

Setup

from giantkelp_ai import AIAgent, configure_redis, RedisUsageConfig, is_redis_configured

# Configure Redis (call once at app startup)
configure_redis(RedisUsageConfig(
    redis_url="redis://localhost:6379",
    stream_key="myapp:ai_usage",  # Redis stream key
    client_id="my_application"     # Identifies your app in usage events
))

# Verify configuration
if is_redis_configured():
    print("Redis usage tracking enabled!")

Usage Events

Once configured, every completion automatically sends usage data to the Redis stream:

# All completion types are tracked
agent = AIAgent(provider="anthropic", agent_name="support_bot")

# Text completions
response = agent.fast_completion("Hello!")

# Streaming completions (usage captured at end of stream)
stream = agent.smart_completion("Write a story", stream=True)
for chunk in agent.normalize_stream(stream):
    print(chunk, end="")

# Image completions
response = agent.image_completion("Describe this", image="photo.jpg", file_path=True)

# Document completions
response = agent.document_completion("Summarize", document="report.pdf", file_path=True)

# Web search
response = agent.web_search("Latest AI news")

Event Data Structure

Each event in the Redis stream contains:

{
  "type": "usage_event",
  "payload": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5",
    "input_tokens": 150,
    "output_tokens": 250,
    "agent_name": "support_bot",
    "client_id": "my_application",
    "message": "What is the capital of France?",
    "timestamp": "2025-02-19T10:30:00.000Z"
  }
}

Note: The message field contains the first 100 characters of the user prompt.

Reading Usage Events

import redis
import json

client = redis.from_url("redis://localhost:6379")

# Get recent usage events
messages = client.xrevrange("myapp:ai_usage", count=10)

for msg_id, data in messages:
    event = json.loads(data[b"job"])
    payload = event["payload"]
    print(f"Model: {payload['model']}, Tokens: {payload['input_tokens']} in / {payload['output_tokens']} out")
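To turn raw events into per-model totals, group the decoded payloads by model. A self-contained sketch (the sample numbers are made up; in practice the payloads come from the stream-reading loop above):

```python
from collections import defaultdict

def aggregate_usage(payloads):
    """Sum input/output tokens per model across usage-event payloads."""
    totals = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})
    for p in payloads:
        entry = totals[p["model"]]
        entry["input_tokens"] += p["input_tokens"]
        entry["output_tokens"] += p["output_tokens"]
    return dict(totals)

# Sample payloads matching the event structure shown earlier
sample = [
    {"model": "claude-haiku-4-5", "input_tokens": 150, "output_tokens": 250},
    {"model": "claude-haiku-4-5", "input_tokens": 100, "output_tokens": 50},
    {"model": "claude-sonnet-4-5", "input_tokens": 500, "output_tokens": 900},
]
totals = aggregate_usage(sample)
```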

Environment Variables

You can also configure Redis via environment variables:

REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your_password  # Optional

Model Selection Guide

When to Use Each Model Tier

| Model Tier | Best For | Examples |
|---|---|---|
| Fast | Quick responses, simple tasks, high-volume requests | Translations, classifications, simple Q&A |
| Smart | Complex reasoning, detailed analysis, creative tasks | Content generation, code review, strategy |
| Reasoning | Deep problem-solving, multi-step reasoning, expert-level analysis | Research, mathematical proofs, complex debugging |

Provider-Specific Models

# Anthropic
agent = AIAgent(provider="anthropic")
# Fast: claude-haiku-4-5
# Smart: claude-sonnet-4-5
# Reasoning: claude-opus-4-1

# OpenAI
agent = AIAgent(provider="openai")
# Fast: gpt-4o-mini
# Smart: gpt-4o
# Reasoning: o3

# Gemini
agent = AIAgent(provider="gemini")
# Fast: gemini-2.5-flash
# Smart: gemini-2.5-pro
# Reasoning: gemini-2.5-pro

# Groq
agent = AIAgent(provider="groq")
# Fast: llama-3.1-8b-instant
# Smart: llama-3.3-70b-versatile
# Reasoning: llama-3.3-70b-versatile

# DeepSeek
agent = AIAgent(provider="deepseek")
# Fast: deepseek-chat
# Smart: deepseek-chat
# Reasoning: deepseek-reasoner

Switching Providers

One of the key benefits of GiantKelp AI is provider flexibility:

# Start with Anthropic
agent = AIAgent(provider="anthropic")
response1 = agent.smart_completion("Explain AI")

# Switch to OpenAI (same code!)
agent = AIAgent(provider="openai")
response2 = agent.smart_completion("Explain AI")

# Switch to Groq (same code!)
agent = AIAgent(provider="groq")
response3 = agent.smart_completion("Explain AI")

# All three work identically!

Error Handling

GiantKelp AI provides comprehensive error handling:

from giantkelp_ai import AIAgent

try:
    agent = AIAgent(provider="anthropic")
    response = agent.smart_completion("Hello")
except ValueError as e:
    # Configuration or input errors
    print(f"Configuration error: {e}")
except RuntimeError as e:
    # API or operational errors
    print(f"Runtime error: {e}")
except FileNotFoundError as e:
    # File-related errors (images, documents)
    print(f"File error: {e}")
except NotImplementedError as e:
    # Feature not supported by provider
    print(f"Feature not available: {e}")

Logging and Debugging

Enable verbose logging for debugging:

import logging

# Configure logging
logging.basicConfig(level=logging.INFO)

# Enable verbose mode
agent = AIAgent(provider="anthropic", verbose=True)

# Now all operations will be logged
response = agent.smart_completion("Test")

Examples

Example 1: Content Generation

from giantkelp_ai import AIAgent

agent = AIAgent(provider="anthropic")

blog_post = agent.smart_completion(
    user_prompt="Write a 300-word blog post about the future of AI in healthcare",
    system_prompt="You are a professional medical technology writer",
    max_tokens=500,
    temperature=0.7
)

print(blog_post)

Example 2: Image Analysis Pipeline

from giantkelp_ai import AIAgent
import os

agent = AIAgent(provider="openai")

# Analyze multiple images
image_folder = "product_photos/"
analyses = []

for filename in os.listdir(image_folder):
    if filename.endswith((".jpg", ".png")):
        analysis = agent.image_completion(
            user_prompt="Describe this product image for an e-commerce catalog",
            image=os.path.join(image_folder, filename),
            smart_model=True,
            json_output=True
        )
        analyses.append({
            "filename": filename,
            "analysis": analysis
        })

print(analyses)

Example 3: Document Summarization

from giantkelp_ai import AIAgent

agent = AIAgent(provider="gemini")

# Summarize a research paper
summary = agent.document_completion(
    user_prompt="""
    Provide a structured summary with:
    1. Main findings
    2. Methodology
    3. Conclusions
    4. Limitations
    """,
    document="research_paper.pdf",
    smart_model=True,
    max_tokens=1000
)

print(summary)

Example 4: Multi-Provider Comparison

from giantkelp_ai import AIAgent

providers = ["anthropic", "openai", "gemini", "groq"]
prompt = "What is the meaning of life?"

results = {}
for provider in providers:
    try:
        agent = AIAgent(provider=provider)
        response = agent.fast_completion(prompt)
        results[provider] = response
    except Exception as e:
        results[provider] = f"Error: {e}"

for provider, response in results.items():
    print(f"\n{provider.upper()}:")
    print(response)

Example 5: Intelligent Customer Support

from giantkelp_ai import AIAgent

agent = AIAgent(provider="anthropic")

# Create support team
agent.create_handoff_team([
    {
        "name": "receptionist",
        "instructions": """
        You are the first point of contact. Be warm and welcoming.
        Understand the customer's needs and route them to the right specialist.
        """,
        "type": "fast",
        "handoffs_to": ["technical", "billing", "general"]
    },
    {
        "name": "technical",
        "instructions": "You solve technical problems. Be patient and thorough.",
        "type": "smart"
    },
    {
        "name": "billing",
        "instructions": "You handle billing inquiries. Be clear and accurate.",
        "type": "fast"
    },
    {
        "name": "general",
        "instructions": "You handle general questions and provide information.",
        "type": "fast"
    }
])

# Handle customer inquiry
customer_message = "I'm having trouble logging into my account"
response = agent.run_agent(customer_message, agent_name="receptionist")
print(response)

API Reference

AIAgent Class

Constructor

AIAgent(provider: str = "anthropic", verbose: bool = False, agent_name: str = "general")

Parameters:

  • provider (str): LLM provider name - "anthropic", "openai", "gemini", "groq", or "deepseek"
  • verbose (bool): Enable detailed logging
  • agent_name (str): Name for this agent instance, used in usage tracking events (default: "general")

Methods

Text Completion Methods

fast_completion(user_prompt, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)

Fast model completion for quick responses.

smart_completion(user_prompt, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)

Smart model completion for complex tasks.

reasoning_completion(user_prompt, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)

Reasoning model completion for advanced problem-solving.

Parameters:

  • user_prompt (str): User's input text
  • system_prompt (str, optional): System instructions
  • max_tokens (int, optional): Maximum tokens to generate
  • temperature (float, optional): Sampling temperature (0.0-1.0)
  • stream (bool): Enable streaming responses
  • json_output (bool): Request JSON formatted output

Returns: str or dict (if json_output=True) or stream object (if stream=True)

Image Analysis

image_completion(user_prompt, image, file_path=True, smart_model=False, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)

Analyze images using vision-capable models.

Parameters:

  • user_prompt (str): Question or instruction about the image
  • image (str or bytes): Image file path or base64 data
  • file_path (bool): True if image is a file path, False if base64
  • smart_model (bool): Use smart model instead of fast
  • Other parameters same as completion methods

Returns: str or dict or stream object

Document Processing

document_completion(user_prompt, document, file_path=True, smart_model=False, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False, split_into_pages=False)

Process PDF documents.

Parameters:

  • user_prompt (str): Question or instruction about the document
  • document (str or bytes): Document file path or bytes
  • file_path (bool): True if document is a file path
  • smart_model (bool): Use smart model instead of fast
  • split_into_pages (bool): Process each page independently
  • Other parameters same as completion methods

Returns: str or dict or stream object, or dict of page results if split_into_pages=True

Web Search

web_search(query, system=None, scope="fast", max_tokens=10000, temperature=None, max_results=20, thinking_budget=5000, country_code=None, city=None)

Perform real-time web searches.

Parameters:

  • query (str): Search query
  • system (str, optional): System prompt
  • scope (str): "smart", "fast", or "reasoning"
  • max_tokens (int): Maximum tokens
  • temperature (float, optional): Sampling temperature
  • max_results (int): Hint for number of results
  • thinking_budget (int, optional): Thinking token budget (Anthropic only)
  • country_code (str, optional): Country code for location-based search
  • city (str, optional): City name for location-based search

Returns: str

Agent Methods

create_agent_sdk_agent(name, instructions, agent_type="smart", handoffs=[], store=True, **agent_kwargs)

Create an OpenAI Agents SDK agent.

create_handoff_team(team_config)

Create a team of agents with handoff relationships.

run_agent(user_prompt, agent=None, agent_name=None, async_mode=False, **runner_kwargs)

Execute a stored agent.

get_agent(name)

Retrieve a stored agent by name.

list_agents()

List all stored agents.

Utility Methods

normalize_stream(stream)

Normalize streaming responses to yield text chunks.

clean_json_output(text)

Parse and clean JSON output from LLM responses.
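The exact cleanup rules aren't documented here, but conceptually clean_json_output has to cope with models that wrap JSON in a markdown code fence. A rough, hypothetical equivalent (not the library's actual code):

```python
import json
import re

FENCE = chr(96) * 3  # a markdown code-fence marker (three backticks)

def strip_fence_and_parse(text):
    """Pull JSON out of a reply that may be wrapped in a code fence."""
    match = re.search(FENCE + r"(?:json)?\s*(.*?)\s*" + FENCE, text, re.DOTALL)
    body = match.group(1) if match else text.strip()
    return json.loads(body)

raw = FENCE + 'json\n{"fruits": [{"name": "apple", "color": "red"}]}\n' + FENCE
data = strip_fence_and_parse(raw)
```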

Redis Usage Tracking Functions

configure_redis

configure_redis(config: RedisUsageConfig) -> None

Configure Redis for usage tracking. Call once at application startup.

Parameters:

  • config (RedisUsageConfig): Redis configuration object

RedisUsageConfig

RedisUsageConfig(
    redis_url: str,
    stream_key: str = "giantkelp:usage",
    client_id: str = "default"
)

Parameters:

  • redis_url (str): Redis connection URL (e.g., "redis://localhost:6379" or "redis://:password@host:port")
  • stream_key (str): Redis stream key for usage events (default: "giantkelp:usage")
  • client_id (str): Identifier for your application in usage events (default: "default")

is_redis_configured

is_redis_configured() -> bool

Check if Redis usage tracking is configured and connected.

Returns: True if Redis is configured and ready, False otherwise.


Best Practices

1. Choose the Right Model Tier

# Use fast for simple, high-volume tasks
summaries = [
    agent.fast_completion(f"Summarize: {text}")
    for text in texts
]

# Use smart for important, complex tasks
strategy = agent.smart_completion(
    "Develop a market entry strategy for...",
    max_tokens=2000
)

# Use reasoning for critical decisions
analysis = agent.reasoning_completion(
    "Analyze the risks and opportunities of..."
)

2. Implement Proper Error Handling

def safe_completion(agent, prompt):
    try:
        return agent.smart_completion(prompt)
    except RuntimeError as e:
        # Log and retry with different provider
        logger.error(f"Provider failed: {e}")
        backup_agent = AIAgent(provider="groq")
        return backup_agent.smart_completion(prompt)
    except Exception as e:
        logger.error(f"Unexpected error: {e}")
        return None

3. Use Streaming for Long Responses

# Better user experience with streaming
stream = agent.smart_completion(
    "Write a comprehensive guide to...",
    stream=True
)

for chunk in agent.normalize_stream(stream):
    print(chunk, end="", flush=True)
    # Update UI in real-time

4. Leverage JSON Mode for Structured Data

# Request structured output
user_data = agent.fast_completion(
    f"Extract name, email, and phone from: {text}",
    json_output=True
)

# Now you can use the structured data
send_email(user_data['email'])

5. Optimize Token Usage

# Be specific with max_tokens
agent.fast_completion(
    "Yes or no: Is this spam?",
    max_tokens=5  # Only need a short answer
)

# Use appropriate temperature
agent.smart_completion(
    "Generate creative story ideas",
    temperature=0.9  # Higher for creativity
)

agent.fast_completion(
    "What is 2+2?",
    temperature=0.1  # Lower for factual answers
)

Performance Tips

  1. Batch Processing: Process multiple items in parallel when possible
  2. Caching: Cache responses for repeated queries
  3. Provider Selection: Choose providers based on your use case (cost, speed, capabilities)
  4. Model Tiering: Use fast models for simple tasks, save smart/reasoning for complex ones
  5. Streaming: Use streaming for long-form content to improve perceived performance
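For tips 1 and 2, the standard library already has what you need. A self-contained sketch — the fast_completion stand-in below is a placeholder so the example runs on its own; swap in the real agent call in your application:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

# Stand-in for agent.fast_completion so this sketch is self-contained.
def fast_completion(prompt):
    return f"summary of: {prompt}"

@lru_cache(maxsize=1024)
def cached_completion(prompt):
    # Repeated identical prompts return the cached answer instead of
    # spending tokens on another API call.
    return fast_completion(prompt)

def batch_completions(prompts, max_workers=8):
    # LLM API calls are I/O-bound, so threads parallelize them well.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(cached_completion, prompts))

results = batch_completions(["text one", "text two", "text one"])
```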

Troubleshooting

Common Issues

Issue: "API key not found"

# Solution: Set environment variable
import os
os.environ['ANTHROPIC_API_KEY'] = 'your-key'
agent = AIAgent(provider="anthropic")

Issue: "Vision not supported for X provider"

# Solution: Use a provider that supports vision
agent = AIAgent(provider="anthropic")  # Supports vision
# or
agent = AIAgent(provider="openai")     # Supports vision

Issue: "Document processing failed"

# Solution: Check file exists and is a valid PDF
import os
if os.path.exists("document.pdf"):
    response = agent.document_completion(
        "Summarize",
        "document.pdf"
    )

Issue: "Rate limit exceeded"

# Solution: Implement retry logic with exponential backoff
import time

def completion_with_retry(agent, prompt, max_retries=3):
    for attempt in range(max_retries):
        try:
            return agent.fast_completion(prompt)
        except RuntimeError as e:
            if "rate limit" in str(e).lower():
                wait = 2 ** attempt
                time.sleep(wait)
            else:
                raise
    raise RuntimeError("Max retries exceeded")


About GiantKelp

GiantKelp is an AI agency based in London, specializing in cutting-edge artificial intelligence solutions for businesses. We build intelligent systems that help organizations leverage the power of AI effectively.

Visit us: www.giantkelp.com

