flex-evals
A flexible evaluation framework for testing and validating AI systems, LLMs, and APIs with structured checks and async support.
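A minimal sketch of the "structured check" idea, not flex-evals' actual API: each check is a named predicate applied to a system's output, and checks run concurrently with asyncio. All names below (`Check`, `run_checks`) are hypothetical.

```python
import asyncio
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    passes: Callable[[str], bool]  # predicate over the system's output

async def run_checks(output: str, checks: list[Check]) -> dict[str, bool]:
    """Evaluate every check against one output, concurrently."""
    async def run_one(check: Check) -> tuple[str, bool]:
        return check.name, check.passes(output)
    results = await asyncio.gather(*(run_one(c) for c in checks))
    return dict(results)

checks = [
    Check("non_empty", lambda out: len(out.strip()) > 0),
    Check("mentions_price", lambda out: "$" in out),
]
print(asyncio.run(run_checks("The total is $42.", checks)))
```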
sik-llms
A lightweight, easy-to-use, and consistent LLM interface across providers and features.
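A conceptual sketch of what a provider-agnostic interface buys you, assuming nothing about sik-llms' real classes (the `Client` protocol and fake providers below are invented for illustration): the calling code stays identical regardless of backend.

```python
from typing import Protocol

class Client(Protocol):
    def create(self, messages: list[dict]) -> str: ...

class FakeOpenAI:
    def create(self, messages: list[dict]) -> str:
        return "openai response"

class FakeAnthropic:
    def create(self, messages: list[dict]) -> str:
        return "anthropic response"

def ask(client: Client, prompt: str) -> str:
    # Same call shape for every provider.
    return client.create([{"role": "user", "content": prompt}])

for client in (FakeOpenAI(), FakeAnthropic()):
    print(ask(client, "hello"))
```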
mcp-this
MCP server that exposes CLI commands as tools for Claude using YAML configuration files.
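A rough sketch of the YAML-to-tool idea, not mcp-this's actual configuration schema: each config entry names a tool and the shell command it wraps. The YAML keys below are invented for illustration; requires `pip install pyyaml`.

```python
import subprocess
import yaml

CONFIG = """
tools:
  list_files:
    command: ls -la
  disk_usage:
    command: df -h
"""

def build_tools(config_text: str) -> dict:
    """Turn each config entry into a callable that runs its command."""
    config = yaml.safe_load(config_text)
    tools = {}
    for name, spec in config["tools"].items():
        def run(cmd=spec["command"]):  # default arg binds each command eagerly
            return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
        tools[name] = run
    return tools

tools = build_tools(CONFIG)
print(tools["list_files"]())
```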
mcp-this-openapi
MCP server that creates tools from OpenAPI/Swagger specifications.
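A simplified sketch of deriving tool definitions from an OpenAPI document; the spec fragment and the resulting tool structure below are invented for illustration, not mcp-this-openapi's real output format.

```python
spec = {
    "paths": {
        "/users": {
            "get": {"operationId": "listUsers", "summary": "List all users"},
            "post": {"operationId": "createUser", "summary": "Create a user"},
        },
    },
}

def tools_from_spec(spec: dict) -> list[dict]:
    """One tool per (path, method) operation, named by its operationId."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({
                "name": op["operationId"],
                "description": op.get("summary", ""),
                "method": method.upper(),
                "path": path,
            })
    return tools

for tool in tools_from_spec(spec):
    print(tool)
```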
sik-llm-eval
A simple yet flexible framework for evaluating Large Language Models (LLMs) on custom use cases.
sik-stochastic-tests
A pytest plugin for testing stochastic systems like LLMs, providing statistical confidence through multiple test runs.
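The core idea behind stochastic testing, sketched without the plugin's real API (the `stochastic` decorator below is hand-rolled for illustration): run a nondeterministic check several times and pass only if enough runs succeed.

```python
import random

def stochastic(runs: int, min_pass_rate: float):
    def decorator(test_fn):
        def wrapper():
            passes = 0
            for _ in range(runs):
                try:
                    test_fn()
                    passes += 1
                except AssertionError:
                    pass
            assert passes / runs >= min_pass_rate, (
                f"passed {passes}/{runs}, below {min_pass_rate:.0%}"
            )
        return wrapper
    return decorator

@stochastic(runs=20, min_pass_rate=0.7)
def test_llm_like_behavior():
    # Stand-in for a nondeterministic LLM call that succeeds ~90% of the time.
    assert random.random() < 0.9

test_llm_like_behavior()
```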
helpsk
Python helper functions and classes.
llm-evaluations
placeholder
llm-workflow
Build LLM workflows.
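A bare-bones sketch of the workflow idea, independent of llm-workflow's real API: a workflow is a sequence of steps where each step's output feeds the next. The `make_workflow` helper and the stand-in steps are invented for illustration.

```python
from typing import Callable

def make_workflow(*steps: Callable[[str], str]) -> Callable[[str], str]:
    """Compose steps into a single callable; each step consumes the last output."""
    def run(value: str) -> str:
        for step in steps:
            value = step(value)
        return value
    return run

# Stand-ins for prompt-building and model-calling steps.
workflow = make_workflow(
    lambda question: f"Answer concisely: {question}",
    lambda prompt: f"[model response to: {prompt!r}]",
    str.strip,
)
print(workflow("What is a workflow?"))
```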
oolearning
A simple machine learning library based on object-oriented design principles.