
LlamaAPI

Flexible API client and server utilities for LlamaSearch.ai applications.

Features

API Client

  • Flexible Authentication: Support for API keys, Bearer tokens, Basic auth, and OAuth2
  • Middleware Pipeline: Customize request/response handling with middleware
  • Caching: Built-in caching system with memory, file, and Redis backends
  • Error Handling: Comprehensive error types and handling
  • Retry Logic: Automatic retry with configurable backoff strategies
  • Streaming Support: Efficient handling of large data streams
  • Rate Limiting: Client-side rate limiting to avoid API throttling

API Server

  • Route Management: Easy-to-use decorators for defining API routes
  • Request Validation: JSON schema validation for requests
  • Middleware Support: Global and route-specific middleware
  • Error Handling: Comprehensive error handling with custom handlers
  • Authentication: Built-in authentication utilities
  • CORS Support: Simple CORS configuration
  • Async Support: First-class support for async/await

Installation

pip install llamaapi-llamasearch

Client Example

from llamaapi import create_client, ApiKeyAuth, LoggingMiddleware

# Create API client with API key authentication
client = create_client(
    base_url="https://api.example.com/v1",
    auth=ApiKeyAuth("your-api-key"),
    middleware=[LoggingMiddleware()],
    timeout=10,
    retries=3,
)

# Make GET request
response = client.get("users")
response.raise_for_status()
users = response.json()
print(f"Found {len(users)} users")

# Make POST request
new_user = {"name": "John Doe", "email": "john.doe@example.com"}
response = client.post("users", json=new_user)
response.raise_for_status()
created_user = response.json()
print(f"Created user: {created_user['name']}")

Server Example

from llamaapi import create_api, Request, Response, HttpMethod

# Create API instance
api = create_api(name="My API", version="1.0.0")

# Define a route
@api.route("/users", methods=HttpMethod.GET)
async def get_users(request: Request) -> Response:
    # Get data from your data source
    users = [{"id": 1, "name": "John"}, {"id": 2, "name": "Jane"}]
    return Response().with_json(users)

# Define a route with path parameters
@api.route("/users/{user_id}", methods=HttpMethod.GET)
async def get_user(request: Request) -> Response:
    user_id = request.path_params.get("user_id")
    
    # Get user data from your data source
    user = {"id": user_id, "name": "John Doe"}
    
    return Response().with_json(user)
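
Starting the server is not shown in this README, so the entry point below is an assumption (a run method on the API instance) sketched for completeness; see the examples directory for the supported way to serve the app.

if __name__ == "__main__":
    # Hypothetical entry point; the actual method or ASGI integration may differ
    api.run(host="127.0.0.1", port=8000)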

For more detailed examples, see the examples directory.

Authentication

API Key Auth

from llamaapi import create_client, ApiKeyAuth

client = create_client(
    base_url="https://api.example.com",
    auth=ApiKeyAuth(api_key="your-api-key"),
)

Bearer Token Auth

from llamaapi import create_client, BearerAuth

client = create_client(
    base_url="https://api.example.com",
    auth=BearerAuth(token="your-token"),
)

OAuth2 Auth

from llamaapi import create_client, OAuth2Auth

auth = OAuth2Auth(
    token_url="https://auth.example.com/oauth/token",
    client_id="your-client-id",
    client_secret="your-client-secret",
    scope="read write",
)

# Use the client credentials flow to obtain a token
auth.client_credentials_flow()

client = create_client(
    base_url="https://api.example.com",
    auth=auth,
)

Advanced Client Features

Middleware

from llamaapi import (
    create_client, 
    LoggingMiddleware, 
    RetryMiddleware, 
    HeadersMiddleware,
)

middleware = [
    LoggingMiddleware(log_headers=True),
    RetryMiddleware(max_retries=3),
    HeadersMiddleware({"User-Agent": "MyApp/1.0"}),
]

client = create_client(
    base_url="https://api.example.com",
    middleware=middleware,
)
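
The pipeline should also accept custom middleware. The hook names in this sketch (on_request/on_response) are assumptions about the middleware interface, which this README does not spell out; adapt them to the package's actual base class.

import time

from llamaapi import create_client

class TimingMiddleware:
    # Hypothetical hooks; the real middleware interface may differ.
    # State is stored on the instance, so this sketch is not thread-safe.
    def on_request(self, request):
        self._start = time.monotonic()
        return request

    def on_response(self, request, response):
        print(f"Request took {time.monotonic() - self._start:.3f}s")
        return response

client = create_client(
    base_url="https://api.example.com",
    middleware=[TimingMiddleware()],
)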

Caching

from llamaapi import create_client, MemoryCache, FileCache

# Memory cache
client = create_client(
    base_url="https://api.example.com",
    cache=MemoryCache(max_size=100),
)

# File cache
client = create_client(
    base_url="https://api.example.com",
    cache=FileCache(cache_dir=".cache"),
)
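
The features list also advertises a Redis backend, which is not shown above. Assuming the package exposes a RedisCache class alongside MemoryCache and FileCache (an assumed name, not documented here), usage would mirror the other backends:

from llamaapi import create_client, RedisCache  # RedisCache is an assumed name

# Redis cache (constructor arguments are illustrative)
client = create_client(
    base_url="https://api.example.com",
    cache=RedisCache(url="redis://localhost:6379/0"),
)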

Streaming

from llamaapi import create_client

client = create_client(base_url="https://api.example.com")

# Stream large data
with client.stream("GET", "large-dataset") as response:
    for chunk in response.iter_content(chunk_size=1024):
        process_chunk(chunk)
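
Rate Limiting

Client-side rate limiting appears in the feature list but has no example here. The rate_limit parameter below is an assumed interface (requests per second), sketched purely for illustration; check the package for the real option name.

from llamaapi import create_client

# Hypothetical parameter: throttle outgoing requests to 5 per second
client = create_client(
    base_url="https://api.example.com",
    rate_limit=5,
)

for page in range(1, 11):
    client.get(f"items?page={page}")  # calls are spaced out to respect the limit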

Advanced Server Features

Request Validation

from llamaapi import create_api, validate_json_schema, Request, Response, HttpMethod

api = create_api(name="My API", version="1.0.0")

user_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "minLength": 2},
        "email": {"type": "string", "format": "email"},
    },
    "required": ["name", "email"],
}

@api.route(
    "/users", 
    methods=HttpMethod.POST,
    middleware=[validate_json_schema(user_schema)]
)
async def create_user(request: Request) -> Response:
    user_data = request.json()
    # Persist the validated data to your data source, then return the new record
    new_user = {"id": 1, **user_data}
    return Response(status_code=201).with_json(new_user)

Authentication

from llamaapi import create_api, require_auth, Request, Response, HttpMethod

api = create_api(name="My API", version="1.0.0")

# Define authentication middleware
async def auth_middleware(request: Request) -> Request:
    api_key = request.headers.get("X-API-Key")
    if api_key == "secret-key":
        request.context["user"] = {"id": "admin", "role": "admin"}
    # Requests without a valid key pass through with no user set;
    # @require_auth rejects them before the handler runs
    return request

api.add_middleware(auth_middleware)

# Require authentication for specific routes
@api.route("/admin-only", methods=HttpMethod.GET)
@require_auth
async def admin_only(request: Request) -> Response:
    user = request.context["user"]  # This is guaranteed to exist
    return Response().with_json({"message": f"Hello, {user['id']}"})
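
CORS Support

CORS configuration is listed among the server features but not demonstrated. The enable_cors helper below is an assumed name, sketched to show what a simple configuration might look like; consult the package for the actual hook.

from llamaapi import create_api

api = create_api(name="My API", version="1.0.0")

# Hypothetical helper; the actual CORS hook is not shown in this README
api.enable_cors(
    allow_origins=["https://app.example.com"],
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)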

License

MIT License
