# langchain-skillkit
Skill-driven agent toolkit for LangGraph with semantic skill discovery.
Give your LangGraph agents reusable, discoverable skills defined as markdown files. There are two ways to use it: `SkillKit` as a standalone toolkit you wire yourself, or the `node` metaclass, which gives you a complete ReAct subgraph with dependency injection.
## Installation & Quick Start
Requires Python 3.11+, `langchain-core>=0.3`, `langgraph>=0.4`.

```bash
pip install langchain-skillkit
```
Skills follow the AgentSkills.io specification — each skill is a directory with a SKILL.md and optional reference files:
```
skills/
  market-sizing/
    SKILL.md           # Instructions + frontmatter (name, description)
    calculator.py      # Template, loaded on demand via SkillRead
  competitive-analysis/
    SKILL.md
    swot-template.md   # Reference doc, loaded on demand via SkillRead
    examples/
      output.json      # Example output
```
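A minimal `SKILL.md` sketch may help; the `name` and `description` frontmatter fields come from the spec note above, while the body text and field values here are purely illustrative:

```markdown
---
name: market-sizing
description: Estimate total addressable market using top-down and bottom-up methods.
---

# Market Sizing

Use `calculator.py` for TAM/SAM/SOM arithmetic. Present both a top-down
and a bottom-up estimate, and state your assumptions explicitly.
```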
```python
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

from langchain_skillkit import node, AgentState

# --- Define tools ---
@tool
def web_search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# --- Declare an agent ---
# Subclassing `node` produces a StateGraph, not a class.
# Call .compile() to get a runnable graph.
class researcher(node):
    llm = ChatOpenAI(model="gpt-4o")
    tools = [web_search]
    skills = "skills/"

    async def handler(state, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response], "sender": "researcher"}

# --- Compile and use standalone ---
graph = researcher.compile()
result = graph.invoke({"messages": [HumanMessage("Size the B2B SaaS market")]})

# --- Or compose into a parent graph ---
workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher.compile())
workflow.add_edge(START, "researcher")
workflow.add_edge("researcher", END)
graph = workflow.compile()
```
## Examples

See `examples/` for complete working code:

- `standalone_node.py` - Simplest usage: declare a node class, compile, invoke
- `manual_wiring.py` - Use `SkillKit` as a standalone toolkit with full graph control
- `multi_agent.py` - Compose multiple agents in a parent graph
- `root_with_checkpointer.py` - Multi-turn conversations with `interrupt()` and `Command(resume=...)`
- `subgraph_with_checkpointer.py` - Subgraph inherits parent's checkpointer automatically
- `custom_state_type.py` - Custom state shape via handler annotation + subgraph schema translation
## API Reference

### `SkillKit(skills_dirs)`

Toolkit that provides the `Skill` and `SkillRead` tools.
```python
from langchain_skillkit import SkillKit

kit = SkillKit("skills/")
all_tools = [web_search] + kit.tools  # [web_search, Skill, SkillRead]
```
Parameters:

- `skills_dirs` (`str | list[str]`): Directory or list of directories containing skill subdirectories
Properties:

| Property | Type | Description |
|---|---|---|
| `tools` | `list[BaseTool]` | `[Skill, SkillRead]` — built once, cached |
### `node`

Declarative agent builder. Subclassing produces a `StateGraph`. Call `.compile()` to get a runnable graph.
```python
from langchain_skillkit import node

class my_agent(node):
    llm = ChatOpenAI(model="gpt-4o")  # Required
    tools = [web_search]              # Optional
    skills = "skills/"                # Optional

    async def handler(state, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response], "sender": "my_agent"}

graph = my_agent.compile()
graph.invoke({"messages": [HumanMessage("...")]})
```
Compile with a checkpointer for `interrupt()` support:

```python
from langgraph.checkpoint.memory import InMemorySaver

graph = my_agent.compile(checkpointer=InMemorySaver())
```
Class attributes:

| Attribute | Required | Description |
|---|---|---|
| `llm` | Yes | Language model instance |
| `tools` | No | List of LangChain tools |
| `skills` | No | Path(s) to skill directories, or a `SkillKit` instance |
Handler signature:

```python
async def handler(state, *, llm, tools, runtime): ...
```

`state` is positional. Everything after `*` is keyword-only and injected by name — declare only what you need:
| Parameter | Type | Description |
|---|---|---|
| `state` | `dict` | LangGraph state (positional, required) |
| `llm` | `BaseChatModel` | LLM pre-bound with all tools via `bind_tools()` |
| `tools` | `list[BaseTool]` | All tools available to the agent |
| `runtime` | `Any` | LangGraph runtime context (passed through from config) |
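The by-name injection described above can be sketched with `inspect.signature`. This is a simplified illustration of the pattern, not the toolkit's actual internals, and `call_with_injection` is a hypothetical helper name:

```python
import inspect

def call_with_injection(handler, state, available):
    # Inspect the handler's signature and pass only the keyword-only
    # parameters it actually declares (llm, tools, runtime, ...).
    params = inspect.signature(handler).parameters
    kwargs = {
        name: value
        for name, value in available.items()
        if name in params and params[name].kind is inspect.Parameter.KEYWORD_ONLY
    }
    return handler(state, **kwargs)

# A handler that declares only `llm`: `tools` and `runtime` are not injected.
# (Plain function here for brevity; real handlers are async.)
def handler(state, *, llm):
    return {"llm": llm, "count": len(state["messages"])}

result = call_with_injection(
    handler,
    {"messages": ["hi"]},
    {"llm": "fake-llm", "tools": [], "runtime": None},
)
# result == {"llm": "fake-llm", "count": 1}
```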
Custom state types — annotate the handler's `state` parameter:

```python
from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages

class WorkflowState(TypedDict, total=False):
    messages: Annotated[list, add_messages]
    draft: dict | None

class my_agent(node):
    llm = ChatOpenAI(model="gpt-4o")

    async def handler(state: WorkflowState, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response]}
```
Without an annotation, `AgentState` is used by default.
### `AgentState`

Minimal LangGraph state type for composing nodes in a parent graph:

```python
from langchain_skillkit import AgentState
from langgraph.graph import StateGraph

workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher.compile())
```
Extend it with your own fields:

```python
class MyState(AgentState):
    current_project: str
    iteration_count: int
```
| Field | Type | Description |
|---|---|---|
| `messages` | `Annotated[list, add_messages]` | Conversation history with LangGraph message reducer |
| `sender` | `str` | Name of the last node that produced output |
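To illustrate the reducer semantics, here is a simplified stand-in for LangGraph's `add_messages` (the real reducer also matches message IDs so an update can replace an earlier message; only the append behavior is shown):

```python
from typing import Annotated, TypedDict

def add_messages(existing: list, update: list) -> list:
    # Simplified stand-in reducer: appends the update to the existing
    # history instead of overwriting it.
    return existing + update

class AgentState(TypedDict):
    # The reducer annotation tells LangGraph how to merge node updates
    # into the existing channel value.
    messages: Annotated[list, add_messages]
    sender: str

state: AgentState = {"messages": ["hello"], "sender": "user"}
update = {"messages": ["hi there"], "sender": "researcher"}
merged = add_messages(state["messages"], update["messages"])
# merged == ["hello", "hi there"]
```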
## Security

- Path traversal prevention: File paths are resolved to absolute paths and checked against the configured skill directories.
- Name validation: Skill names are validated per the AgentSkills.io spec (lowercase alphanumeric plus hyphens, 1-64 characters, must match the directory name).
- Tool scoping: Each `node` subclass only has access to the tools declared in its `tools` attribute.
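A sketch of what the first two checks could look like; the function names and the exact regex are illustrative assumptions, not the toolkit's actual internals:

```python
import re
from pathlib import Path

# Illustrative name rule: lowercase alphanumeric plus hyphens, 1-64 chars,
# with no leading or trailing hyphen.
SKILL_NAME = re.compile(r"^[a-z0-9]([a-z0-9-]{0,62}[a-z0-9])?$")

def is_valid_skill_name(name: str) -> bool:
    return SKILL_NAME.fullmatch(name) is not None

def is_inside(base_dir: Path, requested: Path) -> bool:
    # Path traversal prevention: resolve both paths to absolute form,
    # then require the requested file to live under the skills directory.
    base = base_dir.resolve()
    target = requested.resolve()
    return base == target or base in target.parents

print(is_valid_skill_name("market-sizing"))                     # True
print(is_inside(Path("skills"), Path("skills/../etc/passwd")))  # False
```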
## Why This Toolkit?
Developers building multi-agent LangGraph systems face these problems:
- Prompt reuse is manual. The same domain instructions get copy-pasted across agents with no versioning or structure.
- Agents lack discoverability. There's no standard way for an LLM to find and select relevant instructions at runtime.
- Agent wiring is repetitive. Every ReAct agent needs the same graph boilerplate: handler node, tool node, conditional edges.
- Reference files are inaccessible. Templates, scripts, and examples referenced in prompts can't be loaded on demand.
This toolkit solves all four with:

- Skill-as-markdown: reusable instructions with structured frontmatter
- Semantic discovery: the LLM matches user intent to skill descriptions at runtime
- Declarative agents: `class my_agent(node)` gives you a complete ReAct subgraph
- On-demand file loading: `SkillRead` lets the LLM pull reference files when needed
- AgentSkills.io spec compliance: portable skills that work across toolkits
- Full type safety: mypy strict mode support
## Contributing
This toolkit is extracted from a production codebase and is actively maintained. Issues, feature requests, and pull requests are welcome.
```bash
git clone https://github.com/rsmdt/langchain-skillkit.git
cd langchain-skillkit
uv sync --extra dev
uv run pytest --tb=short -q
uv run ruff check src/ tests/
uv run mypy src/
```