
A lightweight Skills execution engine with LLM integration for building LLM agents


SkillLite Python SDK

A Python SDK for the SkillLite execution engine, using OpenAI-compatible API format as the unified interface.

Supported Providers

Works with any OpenAI-compatible LLM provider:

  • OpenAI (GPT-4, GPT-3.5, etc.)
  • Azure OpenAI
  • Anthropic Claude (via OpenAI-compatible endpoint or native)
  • Google Gemini (via OpenAI-compatible endpoint)
  • Local models (Ollama, vLLM, LMStudio, etc.)
  • DeepSeek, Qwen, Moonshot, Zhipu, and other providers

Installation

pip install skilllite

# With OpenAI SDK (recommended, works with all compatible providers)
pip install skilllite[openai]

# With Anthropic SDK (for Claude's native API)
pip install skilllite[anthropic]

Prerequisites

You need to have the skillbox binary installed:

# From the skillbox directory
cargo install --path .

Quick Start

Basic Usage (Universal - Works with Any Provider)

from openai import OpenAI
from skilllite import SkillManager

# Works with ANY OpenAI-compatible provider
# Just change base_url and api_key for different providers:

# OpenAI
client = OpenAI()

# Ollama (local)
# client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# DeepSeek
# client = OpenAI(base_url="https://api.deepseek.com/v1", api_key="...")

# Qwen (通义千问)
# client = OpenAI(base_url="https://dashscope.aliyuncs.com/compatible-mode/v1", api_key="...")

# Moonshot (月之暗面)
# client = OpenAI(base_url="https://api.moonshot.cn/v1", api_key="...")

# Initialize SkillManager
manager = SkillManager(skills_dir="./my_skills")

# Get tools (OpenAI-compatible format - works with all providers)
tools = manager.get_tools()

# Call any OpenAI-compatible API
response = client.chat.completions.create(
    model="gpt-4",  # or "llama2", "deepseek-chat", "qwen-turbo", etc.
    tools=tools,
    messages=[{"role": "user", "content": "Please help me with..."}]
)

# Handle tool calls (same code works for all providers!)
if response.choices[0].message.tool_calls:
    tool_results = manager.handle_tool_calls(response)
    
    # Continue conversation with results
    messages = [
        {"role": "user", "content": "Please help me with..."},
        response.choices[0].message,
        *[r.to_openai_format() for r in tool_results]
    ]
    
    follow_up = client.chat.completions.create(
        model="gpt-4",
        tools=tools,
        messages=messages
    )

Agentic Loop (Automatic Multi-turn Tool Execution)

from openai import OpenAI
from skilllite import SkillManager

# Works with any provider
client = OpenAI()  # or OpenAI(base_url="...", api_key="...")
manager = SkillManager(skills_dir="./my_skills")

# Create an agentic loop
loop = manager.create_agentic_loop(
    client=client,
    model="gpt-4",
    system_prompt="You are a helpful assistant with access to various skills.",
    max_iterations=10,
    temperature=0.7  # Additional kwargs passed to chat.completions.create()
)

# Run until completion - handles multiple tool calls automatically
final_response = loop.run("Please analyze this data and generate a report.")
print(final_response.choices[0].message.content)
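Under the hood, an agentic loop repeats a simple cycle: call the model, execute any requested tool calls, append the results to the conversation, and stop once the model answers without tools (or `max_iterations` is reached). A minimal sketch of that cycle, with the client and the tool executor stubbed out as plain dicts (this illustrates the pattern, not the SDK's actual internals):

```python
# Simplified agentic-loop sketch. call_model and execute_tools are stubs
# standing in for the LLM client and SkillManager tool execution.
def run_agentic_loop(call_model, execute_tools, user_message, max_iterations=10):
    messages = [{"role": "user", "content": user_message}]
    response = {"role": "assistant", "content": "", "tool_calls": None}
    for _ in range(max_iterations):
        response = call_model(messages)
        tool_calls = response.get("tool_calls")
        if not tool_calls:          # model answered without tools: done
            return response
        messages.append(response)   # keep the assistant turn in history
        messages.extend(execute_tools(tool_calls))  # append tool results
    return response

# Stubbed model: first turn requests a tool, second turn answers.
state = {"n": 0}
def call_model(messages):
    state["n"] += 1
    if state["n"] == 1:
        return {"role": "assistant", "content": None,
                "tool_calls": [{"name": "calculator"}]}
    return {"role": "assistant", "content": "The answer is 42.",
            "tool_calls": None}

def execute_tools(calls):
    return [{"role": "tool", "content": "42"} for _ in calls]

final = run_agentic_loop(call_model, execute_tools, "What is 6*7?")
print(final["content"])  # The answer is 42.
```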

Claude Native API (Optional)

If you prefer using Claude's native API directly:

import anthropic
from skilllite import SkillManager

client = anthropic.Anthropic()
manager = SkillManager(skills_dir="./my_skills")

# Use Claude-specific methods
tools = manager.get_tools_for_claude_native()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    tools=tools,
    messages=[{"role": "user", "content": "..."}]
)

if response.stop_reason == "tool_use":
    results = manager.handle_tool_calls_claude_native(response)

Creating Skills

Skills are defined in directories with a SKILL.md file:

my_skills/
├── web-search/
│   ├── SKILL.md           # Metadata and docs (includes dependency declaration)
│   └── scripts/
│       └── main.py
└── calculator/
    ├── SKILL.md
    └── scripts/
        └── main.py

Note: Python dependencies are declared in the compatibility field of SKILL.md, not in a separate requirements.txt file.

SKILL.md Format

---
name: web-search
description: Search the web for information
compatibility: Requires Python 3.x with requests library, network access
license: MIT
metadata:
  author: example-org
  version: "1.0"
---

# Web Search Skill

This skill searches the web for information.

## Input Parameters

- `query`: The search query (required)

The compatibility field is used to:

  • Detect language (Python/Node/Bash)
  • Enable network access (keywords: network, internet, http, api, web)
  • Auto-install dependencies (known packages like requests, pandas, axios, etc.)
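Because the field is free-form text scanned for keywords, declarations for other runtimes follow the same pattern. A few illustrative (hypothetical) examples:

```yaml
# Python skill with dependencies and network access
compatibility: Requires Python 3.x with requests, pandas; network access

# Node skill (Node.js/axios keywords drive language detection and installs)
compatibility: Requires Node.js with axios, network access

# Bash skill, no network
compatibility: Requires Bash
```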

Skill Entry Point

# main.py
import json
import sys

def main():
    # Read input from stdin
    input_data = json.loads(sys.stdin.read())
    
    # Process the input
    query = input_data.get("query", "")
    
    # Do something...
    result = {"results": [f"Result for: {query}"]}
    
    # Output JSON to stdout
    print(json.dumps(result))

if __name__ == "__main__":
    main()
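Because a skill's contract is simply JSON on stdin and JSON on stdout, you can exercise an entry point without the engine at all. A sketch that writes the minimal script above to a temporary directory and pipes input to it (paths and values are illustrative):

```python
import json
import pathlib
import subprocess
import sys
import tempfile

# Minimal skill entry point, matching the stdin/stdout contract above.
script = '''\
import json, sys
data = json.loads(sys.stdin.read())
print(json.dumps({"results": [f"Result for: {data.get('query', '')}"]}))
'''

with tempfile.TemporaryDirectory() as d:
    path = pathlib.Path(d) / "main.py"
    path.write_text(script)
    # Pipe JSON input on stdin, capture the JSON result from stdout.
    proc = subprocess.run(
        [sys.executable, str(path)],
        input=json.dumps({"query": "hello"}),
        capture_output=True, text=True, check=True,
    )
    result = json.loads(proc.stdout)

print(result)  # {'results': ['Result for: hello']}
```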

API Reference

SkillManager

The main class for managing and executing skills.

Constructor

SkillManager(
    skills_dir: Optional[str] = None,   # Directory containing skills
    binary_path: Optional[str] = None,  # Path to skillbox binary
    cache_dir: Optional[str] = None,    # Cache directory for venvs
    allow_network: bool = False         # Default network access
)

Methods

  • scan_directory(directory) - Scan for skills
  • register_skill(skill_dir) - Register a single skill
  • get_skill(name) - Get skill by name
  • list_skills() - List all skills
  • get_tools() - Get tools in OpenAI-compatible format (works with all providers)
  • get_tools_for_claude() - Get Claude-format tools
  • get_tools_for_openai() - Get OpenAI-format tools
  • get_tools_for_claude_native() - Get tools for Claude's native API
  • execute(skill_name, input_data) - Execute a skill
  • handle_tool_calls(response, format) - Handle LLM tool calls
  • handle_tool_calls_claude_native(response) - Handle Claude native tool calls
  • create_agentic_loop(...) - Create an agentic loop
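For orientation, `handle_tool_calls` essentially maps each tool call in the model's response to an `execute()` call and wraps the outputs as tool-role messages. A simplified sketch, with plain dicts standing in for SDK objects and a stubbed `execute` (the real one runs the skill in the skillbox sandbox):

```python
import json

# Stub standing in for SkillManager.execute (the real method runs the
# named skill in the skillbox sandbox and returns its JSON output).
def execute(skill_name, input_data):
    return {"skill": skill_name, "input": input_data}

def handle_tool_calls(response):
    """Dispatch each tool call to execute() and collect tool-role messages."""
    results = []
    for call in response["choices"][0]["message"]["tool_calls"]:
        args = json.loads(call["function"]["arguments"])
        output = execute(call["function"]["name"], args)
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# OpenAI-shaped response as plain dicts.
response = {"choices": [{"message": {"tool_calls": [
    {"id": "call_1", "function": {"name": "web-search",
                                  "arguments": '{"query": "python"}'}},
]}}]}

results = handle_tool_calls(response)
print(results[0]["role"])  # tool
```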

ToolFormat

Enum for LLM provider formats:

  • ToolFormat.CLAUDE
  • ToolFormat.OPENAI

License

MIT License
