LlamaIndex tool for x402 service discovery — find any x402-payable API endpoint at runtime

Project description

llama-index-x402-discovery

LlamaIndex tool for x402 service discovery — let your agent find and call any paid API endpoint at runtime without hardcoding URLs or API keys.

When your LlamaIndex agent needs web search, data enrichment, image analysis, or any external capability, it calls x402_discover to find the best available service from the live x402 discovery catalog, then calls that endpoint directly using the returned code snippet.

Installation

pip install llama-index-x402-discovery

Quick Start

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index_x402_discovery import get_x402_discovery_tool

# Get the x402 FunctionTool
x402_tool = get_x402_discovery_tool()

# Build a ReAct agent with x402 discovery
llm = OpenAI(model="gpt-4o")
agent = ReActAgent.from_tools([x402_tool], llm=llm, verbose=True)

response = agent.chat("Find the cheapest web search API available right now")
print(response)

Full Working Example

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index_x402_discovery import get_x402_discovery_tool, x402_discover

# Option 1: Get the pre-built FunctionTool
x402_tool = get_x402_discovery_tool()

# Option 2: Wrap the function yourself with custom metadata (replaces Option 1's tool)
x402_tool = FunctionTool.from_defaults(
    fn=x402_discover,
    name="x402_discover",
    description=(
        "Find paid API services from the x402 catalog. "
        "Call this before using any external API. "
        "Returns endpoint URL, price, and Python usage snippet."
    )
)

# Add to any LlamaIndex agent
llm = OpenAI(model="gpt-4o", temperature=0)

# Works with ReActAgent
agent = ReActAgent.from_tools(
    tools=[x402_tool],
    llm=llm,
    verbose=True,
    system_prompt=(
        "You are an autonomous research assistant. "
        "Before calling any paid API, use x402_discover to find "
        "the best available service and get its endpoint URL and code snippet."
    )
)

# Query the agent
response = agent.chat(
    "I need to extract named entities from text. Find an NLP API that costs less than $0.05/call."
)
print(str(response))

Using with a Query Engine Agent

from llama_index.core.agent import ReActAgent
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
from llama_index_x402_discovery import get_x402_discovery_tool

# Load your documents
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Convert query engine to tool
from llama_index.core.tools import QueryEngineTool
doc_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="document_search",
    description="Search internal documents"
)

# Combine with x402 discovery for external APIs
x402_tool = get_x402_discovery_tool()

agent = ReActAgent.from_tools(
    tools=[doc_tool, x402_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True
)

response = agent.chat("Summarize our Q3 docs and find a translation API to localize the summary")
print(str(response))

Function Parameters

The x402_discover function accepts:

Parameter      Type    Default    Description
query          str     required   What capability you need (e.g. 'web search', 'image generation')
max_price_usd  float   0.50       Maximum acceptable price per call in USD
network        str     "base"     Blockchain network: base, ethereum, or solana

Tool Output

Returns a formatted string containing:

  • Service name and endpoint URL
  • Price per call in USD
  • Uptime % and average latency (ms)
  • Description of the service
  • Python code snippet showing how to call the endpoint

Example output:

Service: Weather Data Pro
URL: https://api.example.com/weather
Price: $0.002/call
Uptime: 99.9% | Latency: 85ms
Description: Real-time weather data with hourly forecasts
Snippet:
import requests
resp = requests.get("https://api.example.com/weather",
    headers={"X-Payment": "<x402-token>"},
    params={"location": "New York"})
print(resp.json())

Calling x402_discover Directly

You can also call the discovery function directly without building a full agent:

from llama_index_x402_discovery import x402_discover

# Find an image generation service
result = x402_discover(query="image generation", max_price_usd=0.10)
print(result)

# Find a translation API on Ethereum
result = x402_discover(query="text translation", max_price_usd=0.05, network="ethereum")
print(result)
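
The return value is a formatted string rather than structured data. If you need individual fields, a minimal parse of the header lines shown in the example output above is straightforward (this assumes the "Key: value" layout holds; the sample text below is copied from the example, not fetched live):

```python
# Sample result in the documented output format (excluding the Snippet block)
sample = """Service: Weather Data Pro
URL: https://api.example.com/weather
Price: $0.002/call
Uptime: 99.9% | Latency: 85ms
Description: Real-time weather data with hourly forecasts"""

def parse_result(text: str) -> dict:
    """Split each 'Key: value' line into a dict entry.
    Note: the combined 'Uptime: ... | Latency: ...' line stays one field."""
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            fields[key] = value
    return fields

info = parse_result(sample)
print(info["URL"])  # https://api.example.com/weather
```

This is enough to hand the endpoint URL or price to downstream code without re-prompting the LLM to extract them.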

How It Works

  1. Agent receives a task requiring an external capability
  2. Agent calls x402_discover with a description of what it needs
  3. The tool fetches the x402 discovery catalog
  4. Services are filtered by max_price_usd and ranked by uptime/latency
  5. The best match is returned with endpoint URL and usage code
  6. Agent uses the code snippet to call the service directly
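
Steps 3-5 amount to a filter-then-rank pass over the catalog. A minimal sketch of that logic, using illustrative entries and field names (the real catalog schema may differ):

```python
# Hypothetical catalog entries; field names are assumptions for illustration.
catalog = [
    {"name": "Search A", "price_usd": 0.01, "uptime": 99.9,  "latency_ms": 120},
    {"name": "Search B", "price_usd": 0.80, "uptime": 99.99, "latency_ms": 40},
    {"name": "Search C", "price_usd": 0.02, "uptime": 98.5,  "latency_ms": 300},
]

def best_match(services, max_price_usd=0.50):
    """Filter out services over budget, then pick the best remaining one.
    Higher uptime wins; lower latency breaks ties. Returns None if nothing fits."""
    affordable = [s for s in services if s["price_usd"] <= max_price_usd]
    return max(affordable, key=lambda s: (s["uptime"], -s["latency_ms"]), default=None)

print(best_match(catalog)["name"])  # Search A
```

With the default $0.50 cap, "Search B" is excluded despite its better metrics; raising `max_price_usd` would let it win on uptime.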

Discovery API

Browse all available services: https://x402-discovery-api.onrender.com

  • GET /catalog — List all registered x402 services with quality metrics
  • GET /discover?q=<query> — Search by capability description
  • GET /health — API health check
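
The endpoints above are plain HTTP GETs, so you can query the catalog without the package at all. A small sketch using only the standard library (the JSON response shape is not documented here, so `fetch` just decodes whatever comes back):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://x402-discovery-api.onrender.com"

def discover_url(query: str) -> str:
    """Build the GET /discover search URL for a capability description."""
    return f"{BASE}/discover?{urlencode({'q': query})}"

def fetch(url: str, timeout: float = 10.0):
    """Fetch a catalog endpoint and decode the JSON body."""
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

print(discover_url("web search"))
# https://x402-discovery-api.onrender.com/discover?q=web+search
```

Calling `fetch(discover_url("web search"))` or `fetch(f"{BASE}/catalog")` then returns the live service listings.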

License

MIT

Download files

Download the file for your platform.

Source Distribution

llama_index_x402_discovery-1.0.1.tar.gz (5.2 kB)

Built Distribution

llama_index_x402_discovery-1.0.1-py3-none-any.whl (5.7 kB)

File details

Details for the file llama_index_x402_discovery-1.0.1.tar.gz.

File metadata

File hashes

Hashes for llama_index_x402_discovery-1.0.1.tar.gz
Algorithm Hash digest
SHA256 66d5d3151e990b0122606be9522de19eacee36a05e6f3dd21f7a22fcfef89a98
MD5 19f28044719b97e53e7efb6e7bda4dd3
BLAKE2b-256 d8d989e4115180d61cdf020ccd4da88de22c59f744389845c241e6e8c962a1d1

File details

Details for the file llama_index_x402_discovery-1.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_x402_discovery-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 573bc42343aa0d668cba247ce41f6ad067b157358c927606137a2dbbcb4d7736
MD5 84d51e1527bcb0cbad7b018c575664ff
BLAKE2b-256 0f629e7620d1c30dd779ea0b49236b4c61033ffe08cf84e5e3348129fd420fba
