Highly Customizable Agent Library for maximal visibility and control over your agentic workflows

Highly customizable Agent Library (HICA)
Move beyond agent frameworks to build production-ready AI systems. HICA gives you complete control over your AI agents' Thought, Action, and Observation steps. In practice, that means complete control over prompts, context windows, tool execution, and control flow. Many existing libraries trade control for convenience at the cost of reliability; HICA lets you build agents that work in production.
TL;DR: HICA gives you visibility and control over every part of your agent through stateful conversations, human-in-the-loop by design, and structured logging.
🎯 Why HICA?
Most agent frameworks are black boxes that work great for demos but fail in production, or lock you in with specific vendors. When you don't control the fundamentals, you can't debug decisions, modify prompts, handle edge cases, or change your stack at will.
HICA is built on five core principles:
- Control Your Prompts - Own every instruction your agent receives
- Manage Context Windows - Engineer context for maximum efficiency and reliability
- Simplify Tools & Workflows by Atomization - Everything is an `Event`, including user input, LLM calls, and tool calls
- Own Control Flow - Build custom execution patterns that fit your use case
- Observability - Build your own observability flow by integrating with existing OpenTelemetry workflows or rolling your own
Agentic Workflow
Agents work in a continuous cycle of thinking (Thought) → acting (Action) → observing (Observation).
- Thought: The LLM decides what the next step should be.
- Action: The agent acts by calling a tool with the associated arguments.
- Observation: The model reflects on the tool's response.
These three components work in a continuous Thought-Action-Observation loop. When you build an agent, it can fail at any of these three steps. As long as you control each component of the loop, you can build agents systematically and reliably.
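As an illustration of the loop itself (a generic sketch, not HICA's actual API), the cycle can be written as a few lines of plain Python, with a stubbed "LLM" standing in for the model call:

```python
# Minimal sketch of a Thought-Action-Observation loop.
# The "LLM" here is a stub that picks a tool; a real agent would call a model.

TOOLS = {"add": lambda a, b: a + b}

def think(query, observation):
    """Thought: decide the next step (stubbed; a real agent asks the LLM)."""
    if observation is None:
        return {"intent": "add", "arguments": {"a": 3, "b": 4}}
    return {"intent": "done"}

def run_agent(query):
    observation = None
    events = [{"type": "user_input", "data": query}]
    while True:
        step = think(query, observation)  # Thought
        events.append({"type": "llm_response", "data": step})
        if step["intent"] == "done":
            return observation, events
        observation = TOOLS[step["intent"]](**step["arguments"])  # Action
        events.append({"type": "tool_response", "data": observation})  # Observation

result, events = run_agent("Calculate 3 plus 4")
```

Because every step is recorded as an event, you can inspect, replay, or intervene at any point in the loop.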
HICA is a generalized Python library for building 12-factor-compliant agents, designed to handle tool execution, human interactions, and state management with a modular, extensible architecture.
Features
- Tool Support: Register and execute custom tools (e.g., calculator operations).
- Human Interaction: Handle clarification requests and approvals via CLI or HTTP.
- Thread Management: Maintain conversation state with JSON or XML serialization.
- Customizable Prompts: Configure LLM prompts with reasoning steps.
- HTTP API: Expose agent functionality via FastAPI endpoints.
- State Management: In-memory thread store, extensible to databases.
- Structured Logging: Comprehensive logging with `structlog` for debugging and monitoring.
- Statelessness: Externalized state management for scalability.
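To illustrate the statelessness principle (a generic sketch, not HICA's `ThreadStore` API): conversation state lives in a serializable event log, so any worker can load a thread, advance it, and persist it again.

```python
import json
import uuid
from pathlib import Path

class FileThreadStore:
    """Toy external state store: each thread is a JSON file of events."""

    def __init__(self, root="context"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def create(self, events):
        thread_id = uuid.uuid4().hex
        self._write(thread_id, events)
        return thread_id

    def load(self, thread_id):
        return json.loads((self.root / f"{thread_id}.json").read_text())

    def update(self, thread_id, events):
        self._write(thread_id, events)

    def _write(self, thread_id, events):
        (self.root / f"{thread_id}.json").write_text(json.dumps(events))

store = FileThreadStore()
tid = store.create([{"type": "user_input", "data": "Calculate 3 plus 4"}])
events = store.load(tid)  # any process can resume the thread from here
events.append({"type": "tool_response", "data": 7.0})
store.update(tid, events)
```

Because state is external and serializable, agent workers themselves stay stateless and can be scaled horizontally.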
Usage
Example: Running an Agent with Calculator Tool
Set environment variables in `.env`:

```
OPENAI_API_KEY="your-api-key"
```

Run the example: the `src/main.py` script processes the query ("Calculate 3 plus 4") using the `add` tool from `calculator_tool.py`:
```python
# src/main.py
import asyncio

import instructor
import structlog
from openai import AsyncOpenAI

from example.calculator_tool import registry as calculator_registry
from hica.agent import Agent, AgentConfig
from hica.core import Event, Thread
from hica.state import ThreadStore

logger = structlog.get_logger()

async def main():
    client = instructor.from_openai(AsyncOpenAI())
    config = AgentConfig(
        model="gpt-4.1-mini",
        system_prompt=(
            "You are an autonomous agent. Reason carefully to select tools based on their name, description, and parameters. "
            "Analyze the user input, identify the required operation, and determine if clarification is needed."
        ),
        context_format="json",
    )
    agent = Agent(
        client=client,
        config=config,
        tool_registry=calculator_registry,
        metadata={"userid": "1234", "role": "analyst"},
    )
    thread = Thread(events=[Event(type="user_input", data="Calculate 3 plus 4")])
    store = ThreadStore()
    thread_id = store.create(thread)
    updated_thread = await agent.agent_loop(thread)
    store.update(thread_id, updated_thread)
    logger.info(
        "Thread state",
        thread_id=thread_id,
        events=[e.model_dump() for e in updated_thread.events],
    )

if __name__ == "__main__":
    asyncio.run(main())
```
Run the script:

```
python src/main.py
```
Event output (in `context/<thread_id>.json`):

```json
{
  "events": [
    {"type": "user_input", "data": "Calculate 3 plus 4", "timestamp": "..."},
    {"type": "llm_response", "data": {"intent": "add"}, "timestamp": "..."},
    {"type": "llm_response", "data": {"intent": "add", "arguments": {"a": 3.0, "b": 4.0}}, "timestamp": "..."},
    {"type": "tool_call", "data": {"intent": "add", "arguments": {"a": 3.0, "b": 4.0}}, "timestamp": "..."},
    {"type": "tool_response", "data": 7.0, "timestamp": "..."},
    {"type": "llm_response", "data": {"intent": "done"}, "timestamp": "..."},
    {"type": "tool_call", "data": {"intent": "done", "message": "Task completed by agent."}, "timestamp": "..."}
  ],
  "metadata": {"userid": "1234", "role": "analyst"},
  "version": 2
}
```
🚀 MCP (Model Context Protocol) Tool Integration
HICA now supports seamless integration with FastMCP and other MCP-compatible tool servers. You can register and invoke both local Python tools and remote MCP tools in a unified agent workflow.
Key Benefits
- Unified Tool Registry: Register local and remote (MCP) tools together.
- Dynamic Tool Loading: Load tool definitions from any MCP server at runtime.
- LLM-Orchestrated Tool Use: The agent can reason about and call both local and MCP tools in the same workflow.
- Robust Serialization: All tool results (including complex MCP content types) are normalized for logging, storage, and downstream use.
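To illustrate the normalization idea behind the last point (a generic sketch, not HICA's internal code), tool results of arbitrary types can be coerced into JSON-safe values before logging or storage:

```python
import json
from dataclasses import asdict, is_dataclass

def normalize(result):
    """Coerce an arbitrary tool result into JSON-serializable data."""
    if isinstance(result, (str, int, float, bool)) or result is None:
        return result
    if isinstance(result, (list, tuple)):
        return [normalize(item) for item in result]
    if isinstance(result, dict):
        return {str(k): normalize(v) for k, v in result.items()}
    if is_dataclass(result):
        return normalize(asdict(result))
    if isinstance(result, bytes):
        return result.decode("utf-8", errors="replace")
    return repr(result)  # last resort: a readable string

# Every normalized value round-trips through json.dumps without raising.
payload = normalize({"rows": [(1, "a"), (2, "b")], "raw": b"ok"})
serialized = json.dumps(payload)
```

Normalizing at the boundary means the event log stays uniformly serializable no matter which MCP content types a server returns.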
Register MCP tools and run the Agent
```python
import asyncio

from hica.agent import Agent, AgentConfig
from hica.core import Event, Thread
from hica.state import ThreadStore
from hica.tools import MCPConnectionManager, ToolRegistry  # adjust import paths to your HICA version

# MCP server config
registry = ToolRegistry()
mcp_config = {
    "mcpServers": {
        "sqlite": {
            "command": "uvx",
            "args": ["mcp-server-sqlite", "--db-path", "db.sqlite"],
        }
    }
}

# Optionally, register local tools as well
@registry.tool()
def add(a: int, b: int) -> int:
    return a + b

mcp_manager = MCPConnectionManager(mcp_config)

async def main():
    await mcp_manager.connect()
    await registry.load_mcp_tools(mcp_manager)
    agent = Agent(
        client=...,  # your LLM client
        config=AgentConfig(
            model="gpt-4.1-mini",
            system_prompt="You are an autonomous agent. Reason carefully to select tools based on their name, description, and parameters.",
            context_format="json",
        ),
        tool_registry=registry,
        metadata={"userid": "1234", "role": "analyst"},
    )
    thread = Thread(events=[Event(type="user_input", data="List all tables in the database")])
    store = ThreadStore()
    thread_id = store.create(thread)
    updated_thread = await agent.agent_loop(thread)
    store.update(thread_id, updated_thread)
    print("Thread state:", [e.model_dump() for e in updated_thread.events])
    await mcp_manager.close()

if __name__ == "__main__":
    asyncio.run(main())
```
Check out the complete example in `/example/main_mcp_tool.py`.
🤝 Contributing
We welcome contributions from the community! Please see our CONTRIBUTING.md for guidelines on how to report issues, submit pull requests, and get involved.
Feel free to email me if you have questions or suggestions.