
langchain-skillkit

Skill-driven agent toolkit for LangGraph with semantic skill discovery.


Give your LangGraph agents reusable, discoverable skills defined as markdown files. There are two ways to use it: SkillKit as a standalone toolkit you wire yourself, or the node metaclass, which gives you a complete ReAct subgraph with dependency injection.


Installation & Quick Start

Requires Python 3.11+, langchain-core>=0.3, langgraph>=0.4.

pip install langchain-skillkit

Skills follow the AgentSkills.io specification — each skill is a directory with a SKILL.md and optional reference files:

skills/
  market-sizing/
    SKILL.md                # Instructions + frontmatter (name, description)
    calculator.py           # Template — loaded on demand via SkillRead
  competitive-analysis/
    SKILL.md
    swot-template.md        # Reference doc — loaded on demand via SkillRead
    examples/
      output.json           # Example output
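For reference, a minimal SKILL.md might look like this. The frontmatter keys follow the name/description requirement mentioned above; the body content is purely illustrative:

```markdown
---
name: market-sizing
description: Estimate total addressable market using top-down and bottom-up methods.
---

# Market Sizing

1. Clarify the market definition and target segment.
2. Produce a top-down estimate from industry data.
3. Cross-check with a bottom-up estimate (customers × price).
4. Load calculator.py via SkillRead for the arithmetic.
```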
With skills in place, declare an agent and compose it into a graph:

from langchain_core.tools import tool
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langchain_skillkit import node, AgentState

# --- Define tools ---

@tool
def web_search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# --- Declare an agent ---
# Subclassing `node` produces a StateGraph, not a class.
# Call .compile() to get a runnable graph.

class researcher(node):
    llm = ChatOpenAI(model="gpt-4o")
    tools = [web_search]
    skills = "skills/"

    async def handler(state, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response], "sender": "researcher"}

# --- Compile and use standalone ---

graph = researcher.compile()
result = graph.invoke({"messages": [HumanMessage("Size the B2B SaaS market")]})

# --- Or compose into a parent graph ---

workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher.compile())
workflow.add_edge(START, "researcher")
workflow.add_edge("researcher", END)
graph = workflow.compile()

Examples

See the examples/ directory in the repository for complete working code.

API Reference

SkillKit(skills_dirs)

Toolkit that provides Skill and SkillRead tools.

from langchain_skillkit import SkillKit

kit = SkillKit("skills/")
all_tools = [web_search] + kit.tools  # [web_search, Skill, SkillRead]

Parameters:

  • skills_dirs (str | list[str]): Directory or list of directories containing skill subdirectories

Properties:

  • tools (list[BaseTool]): [Skill, SkillRead] — built once, cached

node

Declarative agent builder. Subclassing produces a StateGraph. Call .compile() to get a runnable graph.

from langchain_skillkit import node

class my_agent(node):
    llm = ChatOpenAI(model="gpt-4o")    # Required
    tools = [web_search]                  # Optional
    skills = "skills/"                    # Optional

    async def handler(state, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response], "sender": "my_agent"}

graph = my_agent.compile()
graph.invoke({"messages": [HumanMessage("...")]})

Compile with a checkpointer for interrupt() support:

from langgraph.checkpoint.memory import InMemorySaver

graph = my_agent.compile(checkpointer=InMemorySaver())

Class attributes:

  • llm (required): Language model instance
  • tools (optional): List of LangChain tools
  • skills (optional): Path(s) to skill directories, or a SkillKit instance

Handler signature:

async def handler(state, *, llm, tools, runtime): ...

state is positional. Everything after * is keyword-only and injected by name — declare only what you need:

  • state (dict): LangGraph state — positional, required
  • llm (BaseChatModel): LLM pre-bound with all tools via bind_tools()
  • tools (list[BaseTool]): All tools available to the agent
  • runtime (Any): LangGraph runtime context, passed through from config
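This injection can be pictured as plain signature inspection: collect the keyword-only parameter names the handler declares and pass only those values. A simplified sketch of the mechanism, not the library's code:

```python
import inspect
from typing import Any, Callable


def inject_kwargs(handler: Callable[..., Any], available: dict[str, Any]) -> dict[str, Any]:
    """Select only the keyword-only parameters the handler actually declares."""
    sig = inspect.signature(handler)
    return {
        name: available[name]
        for name, param in sig.parameters.items()
        if param.kind is inspect.Parameter.KEYWORD_ONLY and name in available
    }


async def handler(state, *, llm):  # declares only `llm`
    ...


# Hypothetical injectable values; only `llm` would be passed through here.
available = {"llm": "fake-llm", "tools": [], "runtime": None}
```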

Custom state types — annotate the handler's state parameter:

from typing import Annotated, TypedDict
from langgraph.graph.message import add_messages

class WorkflowState(TypedDict, total=False):
    messages: Annotated[list, add_messages]
    draft: dict | None

class my_agent(node):
    llm = ChatOpenAI(model="gpt-4o")

    async def handler(state: WorkflowState, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response]}

Without an annotation, AgentState is used by default.
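Mechanically, the annotation-vs-default choice can be read straight off the handler signature. A simplified sketch of that lookup (not the library's code), where AgentState would be passed as the default:

```python
import inspect


def state_type_of(handler, default):
    """Return the handler's state annotation, or `default` when absent."""
    params = list(inspect.signature(handler).parameters.values())
    if not params or params[0].annotation is inspect.Parameter.empty:
        return default
    return params[0].annotation
```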

AgentState

Minimal LangGraph state type for composing nodes in a parent graph:

from langchain_skillkit import AgentState
from langgraph.graph import StateGraph

workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher)

Extend it with your own fields:

class MyState(AgentState):
    current_project: str
    iteration_count: int

Fields:

  • messages (Annotated[list, add_messages]): Conversation history with the LangGraph message reducer
  • sender (str): Name of the last node that produced output

Security

  • Path traversal prevention: File paths resolved to absolute and checked against skill directories.
  • Name validation: Skill names validated per AgentSkills.io spec — lowercase alphanumeric + hyphens, 1-64 chars, must match directory name.
  • Tool scoping: Each node subclass only has access to the tools declared in its tools attribute.
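As a rough illustration, the first two checks can be expressed in a few lines. This is an illustrative sketch under the rules stated above, not the library's implementation:

```python
import re
from pathlib import Path

# Name rule as described: lowercase alphanumeric + hyphens, 1-64 chars.
SKILL_NAME_RE = re.compile(r"[a-z0-9-]{1,64}")


def is_valid_skill_name(name: str) -> bool:
    return SKILL_NAME_RE.fullmatch(name) is not None


def is_safe_path(requested: str, skills_dir: str) -> bool:
    """Reject file paths that resolve outside the skills directory."""
    base = Path(skills_dir).resolve()
    target = Path(skills_dir, requested).resolve()
    return target.is_relative_to(base)
```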

Why This Toolkit?

Developers building multi-agent LangGraph systems face these problems:

  1. Prompt reuse is manual. The same domain instructions get copy-pasted across agents with no versioning or structure.
  2. Agents lack discoverability. There's no standard way for an LLM to find and select relevant instructions at runtime.
  3. Agent wiring is repetitive. Every ReAct agent needs the same graph boilerplate: handler node, tool node, conditional edges.
  4. Reference files are inaccessible. Templates, scripts, and examples referenced in prompts can't be loaded on demand.

This toolkit solves all four with:

  • Skill-as-markdown: reusable instructions with structured frontmatter
  • Semantic discovery: the LLM matches user intent to skill descriptions at runtime
  • Declarative agents: class my_agent(node) gives you a complete ReAct subgraph
  • On-demand file loading: SkillRead lets the LLM pull reference files when needed
  • AgentSkills.io spec compliance: portable skills that work across toolkits
  • Full type safety: mypy strict mode support

Contributing

This toolkit is extracted from a production codebase and is actively maintained. Issues, feature requests, and pull requests are welcome.

git clone https://github.com/rsmdt/langchain-skillkit.git
cd langchain-skillkit
uv sync --extra dev
uv run pytest --tb=short -q
uv run ruff check src/ tests/
uv run mypy src/

GitHub: https://github.com/rsmdt/langchain-skillkit
