
https://github.com/l3yx/intentlang

Intent-Driven Embedded AI Programming Framework

"True innovation lies not in better imitation, but in deeper understanding."

IntentLang is a lightweight AI agent framework that breaks the boundary between AI and programs. Instead of limiting AI to predefined function calls, it embraces "Code as Intent".

💡 Why IntentLang?

IntentLang fundamentally rethinks how AI agents work:

| Traditional Agents | IntentLang |
| --- | --- |
| 🔧 Discrete function calls | 🎯 Continuous code execution |
| 📦 Data in context | 🔗 Data in runtime (schema-only) |
| 🔌 Tools as external APIs | 🧬 Tools as embedded objects |
| 📝 Prompt engineering | 🏗️ Intent engineering |
1. Precise Modeling and Computation of Intent: Human Intent as a First-Class Citizen

IntentLang is the first framework to formally represent human intent as seven elements (Goal, Contexts, Tools, Input, Strategy, Constraints, Output). These elements are transformed into an XML-based Intent IR (Intermediate Representation) that guides the LLM to progressively generate code, computing the intent and iteratively converging on it. This is not mere prompt engineering but a systematic form of intent engineering, allowing expert knowledge to be accumulated in a reusable, verifiable form.

2. Paradigm Shift: From Constrained “Function Calling” to Free “Code Execution”

IntentLang completely abandons the discrete, inefficient, and strictly schema-constrained Function Calling paradigm found in mainstream frameworks. Instead of confining AI to predefined tool functions, we empower it to generate and execute Python code directly. This means AI expression and operation are no longer isolated, atomic calls, but a continuous, stateful, and Turing-complete computational process.

3. Separation of Data and Instructions: A Fundamental Break from Context Limitations

In IntentLang, input data (regardless of size) is not serialized and injected into the LLM context. The AI receives only metadata about the data objects (names and descriptions). It must generate code to access these in-memory objects on demand at runtime. This model fundamentally eliminates token limits and cost issues caused by large inputs in LLM applications.
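To make the schema-only idea concrete, here is a minimal sketch (the names are illustrative, not IntentLang API):

```python
# The data object lives only in the host process...
big_logs = [f"line {i}" for i in range(100_000)]

# ...while only this metadata would ever enter the LLM context:
schema = {"name": "big_logs", "description": "application log lines"}

# The AI then generates code that accesses the object by name at runtime:
line_count = len(big_logs)
```

However large `big_logs` grows, the prompt cost stays constant: only the schema is serialized.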

4. Embedded Execution: Eliminating Tool Invocation Boundaries

You can inject any Python object—whether an initialized database connection, a browser instance, or a complex business model—directly into the AI execution environment as a tool. Code generated by the AI shares the same execution flow and runtime context as the host program, enabling continuous access to object attributes, method invocations, and natural perception and evolution of state. In this model, the AI no longer participates through discrete tool calls, but acts as an embedded execution unit within the program, collaborating with host code to complete computation.

5. Native Python Expression: A “Super DSL” with Zero Learning Cost

IntentLang treats Python itself as its domain-specific language (DSL). There is no need to learn new, complex graph orchestration syntaxes or YAML configurations. If you can write Python, you already know IntentLang. This dramatically lowers the learning barrier while allowing full leverage of Python’s vast and mature ecosystem.

🚀 Quick Start

Installation

IntentLang requires Python 3.10 or higher.

Using pip:

pip install intentlang

Using uv:

uv add intentlang

Configuration

IntentLang uses environment variables for LLM configuration. Create a .env file in your project root directory and add the following settings:

# OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
# OPENAI_BASE_URL=https://open.bigmodel.cn/api/paas/v4
OPENAI_BASE_URL=https://api.deepseek.com
OPENAI_API_KEY=
OPENAI_MODEL_NAME=
OPENAI_EXTRA_BODY={}

Your First Intent

from intentlang import Intent

# Define what you want
intent = Intent().goal("Sum all even numbers").input(
    numbers=([1,2,3,4,5,6], "list of integers")
).output(sum=(int, "sum of even numbers"))

# Execute the computation
result = intent.run_sync()
print(result.output.sum)  # Output: 12
print(result.usage.model_dump_json())

What just happened?

  • ✅ Intent formalized into structured elements (here: Goal, Input, Output)

  • ✅ AI generated Python code to solve this

  • ✅ Code executed in runtime, not function calls

  • ✅ Input data never entered LLM context (only schema did)
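For this intent, the code the model generates and executes might look like the following (a sketch — the actual generated code varies by model):

```python
# Hypothetical code the LLM might generate for the "sum even numbers" intent.
# `numbers` is the host-provided input object, available by name in the runtime.
numbers = [1, 2, 3, 4, 5, 6]
even_sum = sum(n for n in numbers if n % 2 == 0)
print(even_sum)  # printed output is fed back to the LLM as an observation
```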

⚠️ Security Notice

CRITICAL: IntentLang executes AI-generated code in your runtime.

Always run in isolated environments:

  • 🐳 Docker containers

  • 📦 Sandboxed Python environments

  • 🔒 Virtual machines
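One possible container setup, sketched below — the image tag and `your_script.py` are placeholders, not project recommendations:

```shell
# Run an IntentLang script in a throwaway container; the .env file supplies
# the LLM credentials and is the only state shared with the host.
docker run --rm -it \
  -v "$PWD":/app -w /app \
  --env-file .env \
  python:3.12-slim \
  bash -c "pip install intentlang && python your_script.py"
```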

📖 Examples

🌐 Web Automation: Playwright Continuous Operations

This example demonstrates how the AI can directly operate on a Playwright object initialized by the host program, interacting with it continuously to open a web page and then extract its title.

import asyncio
from intentlang import Intent
from playwright.async_api import async_playwright, Page


async def test():
    playwright = await async_playwright().start()
    browser = await playwright.chromium.launch(headless=False)
    context = await browser.new_context()

    url = "https://l3yx.github.io/"

    intent_a = (
        Intent()
        .goal("Open the url")
        .input(
            # Format: name=(value, "description")
            url=(url, "Target URL to open")
        )
        .tools([
            # Formats: function, or (object, "name", "description")
            (context, "context", "Playwright Context instance (async API)")
        ])
        .output(
            # Format: name=(type, "description")
            page=(Page, "Playwright Page instance")
        )
    )
    page = (await intent_a.run()).output.page
    print(page)

    intent_b = (
        Intent()
        .goal("Get webpage title")
        .input(
            page=(page, "Playwright Page instance")
        )
        .output(
            title=(str, "Webpage title")
        )
    )
    title = (await intent_b.run()).output.title
    print(title)

asyncio.run(test())

📊 Text Analysis: Movie Review Sentiment Classification

This example demonstrates how to handle a semantic recognition task.

import os
from typing import List, Literal
from pydantic import Field
from intentlang import Intent, LLMConfig, IntentIO
from intentlang.tools import create_reason_func


llm_config = LLMConfig(
    base_url=os.getenv("OPENAI_BASE_URL"),
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name=os.getenv("OPENAI_MODEL_NAME"),
    extra_body=os.getenv("OPENAI_EXTRA_BODY")
)


movie_reviews = """
This movie has a gripping plot with constant twists—my adrenaline was through the roof, highly recommend!
The visual effects are mind-blowing; the VFX team deserves an Oscar—pure audiovisual feast!
The acting is spot-on, especially the protagonist's inner turmoil feels so real—I was in tears.
Pacing is a bit slow, but the philosophy is profound; it leaves you thinking long after, worth a rewatch.
Avoid this one—full of plot holes, logic collapses, wasted two hours of my life.
"""


class ReviewResult(IntentIO):
    review: str = Field(description="Original review text")
    sentiment: Literal["positive", "negative", "mixed"] = Field(description="Sentiment classification")
    reason: str = Field(description="Reason for the classification")


class Result(IntentIO):
    reviews: List[ReviewResult] = Field(
        description="Sentiment analysis results for each review")


intent = (
    Intent()
    .goal("Sentiment categorization for each movie review")
    .input(
        movie_reviews=(movie_reviews, "Contains multiple reviews, one per line")
    )
    .output(Result)
    .how("Analyze the reviews concurrently in multiple threads, determining each one's sentiment and giving a reason")
    .tools([create_reason_func(llm_config)])
    .rules(["Use a thread pool with a concurrency of 5 for the sentiment classification"])
)

result = intent.compile(cache=True).run_sync()
print(result.output.model_dump_json(indent=2))
print(result.usage.model_dump_json())

🛠️ Core Concepts

The core of IntentLang revolves around the construction of the Intent object, which transforms natural language intent into executable Python code and enables seamless collaboration with the host program.

Intent

Definition and Construction

Intent is the minimal logical unit of the IntentLang framework, encapsulating all the information AI needs to complete a specific task. An Intent object is constructed through method chaining, clearly defining each element of the task.

The 7 Defining Elements of Intent:

.goal(goal: str)

Purpose: Clearly describe the ultimate goal that AI needs to achieve. This is the core instruction for the LLM to understand the task.

intent = Intent().goal("Calculate the sum of even numbers")

.ctxs(ctxs: list[str])

Purpose: Provide additional contextual information to the LLM. These are plain text descriptions that help the LLM better understand the task background or domain-specific knowledge.

intent = Intent().ctxs([
    "integers ending in 7 are lucky numbers",
    "4 is considered an unlucky number"
])

.tools(tools: list[Callable | Tuple[object, str, str]])

Purpose: Inject available tools into AI's execution environment. These tools can be regular Python functions or any Python objects already initialized in the host program.

Key Feature: Unlike traditional Function Calling which is limited to functions, IntentLang allows you to inject any Python object as a tool. This means AI can directly call object methods, access object attributes, and achieve object-level continuous operations without serialization and deserialization.

Usage: Accepts a list where elements can be functions or (object, name, description) tuples.

intent = Intent().tools([
    check_lucky_number,  # function
    (browser_context, "context", "Playwright context")  # object
])

.input(input: IntentIO | None = None, **field_definitions)

Purpose: Define the input data that AI can access when executing this intent. This data is passed as references to Python objects.

Key Feature: Input data itself is NOT serialized and sent to the LLM as context. The LLM only receives metadata (name and description) about these data objects. AI must generate Python code to access and manipulate these in-memory objects on-demand.

Usage:

  • Dynamic definition: Define input fields through keyword arguments. Each argument is a (value, description) tuple.
  • Predefined model: Directly pass an instantiated IntentIO subclass object.

# Dynamic definition
intent = Intent().input(
    numbers=([1,2,3,4,5], "list of integers")
)

# Predefined model
class MyInput(IntentIO):
    numbers: list[int]
    
intent = Intent().input(MyInput(numbers=[1,2,3,4,5]))

.how(how: str)

Purpose: Provide high-level strategy or implementation approach on how to achieve the goal. It guides the LLM to follow specific methodologies when generating code.

intent = Intent().how("Process each item one by one")

.rules(rules: list[str])

Purpose: Set specific constraints or behavioral guidelines that must be followed during execution. These rules help the LLM refine its code generation logic and ensure outputs meet expectations.

intent = Intent().rules([
    "Must validate all inputs before processing",
    "Handle errors gracefully"
])

.output(output: Type[IntentIO] | None = None, **field_definitions)

Purpose: Define the result structure that AI needs to produce after successfully completing the intent. It forces AI to return data in the expected Pydantic model format, ensuring structured and verifiable output.

Usage:

  • Dynamic definition: Define output fields through keyword arguments. Each argument is a (type, description) tuple.
  • Predefined model: Directly pass a Pydantic model class that inherits from IntentIO.

# Dynamic definition
intent = Intent().output(
    sum=(int, "sum of even numbers")
)

# Predefined model
class Result(IntentIO):
    sum: int
    count: int
    
intent = Intent().output(Result)

Compilation and Execution

After defining an Intent, you can launch it through the compile and run methods.

  • .compile(engine_factory: EngineFactory | None = None, max_iterations: int = 30, cache: bool = False, record: bool = True) -> Executor:

    • Purpose: "Compile" the Intent object into an executable Executor instance. This process generates the final Prompt and prepares the execution environment.
    • Parameters:
      • engine_factory: Currently only LLMEngineFactory is available.
      • max_iterations: Maximum iteration rounds (default 30) to prevent infinite loops.
      • cache: Whether to enable caching (default False), reusing previously generated code from the Jupyter Notebook cache.
      • record: Whether to record the execution process to a Notebook (default True).
    • Usage: Use this method when you need more fine-grained control over the execution process.
  • .run() -> IntentResult:

    • Purpose: This is a convenience method that automatically compiles and immediately executes the Intent, then returns the final result.
    • Relationship: Calling intent.run() is essentially equivalent to intent.compile().run().
    • Key Feature: run() is an asynchronous method that waits for AI to complete all necessary code generation and execution steps, finally returning an IntentResult object containing the structured data produced by AI. Intent also provides a synchronous version run_sync().

Executor

Executor is the execution engine for Intent. It is responsible for transforming the constructed Intent object into a Prompt that the LLM can understand, then iteratively executing the Python code generated by the LLM in the Python runtime environment. The Executor continues iterating until the AI successfully produces a result that conforms to the OutputModel definition, or reaches the preset maximum number of iterations.
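A minimal sketch of that iterate-until-valid loop (illustrative only — this is not the framework's actual implementation, and `validate_output` stands in for Pydantic validation against the declared output model):

```python
def validate_output(raw):
    """Stand-in for validating a candidate against the output model {'sum': int}."""
    if not isinstance(raw, dict) or not isinstance(raw.get("sum"), int):
        raise ValueError("output does not match the model: expected {'sum': int}")
    return raw

def run_until_valid(candidates, max_iterations=30):
    last_error = None
    for raw in candidates[:max_iterations]:
        try:
            return validate_output(raw)  # conforms to the output model -> done
        except ValueError as exc:
            last_error = exc             # in the real loop, fed back to the LLM
    raise RuntimeError(f"max_iterations reached without a valid result: {last_error}")

# First candidate fails validation; the loop accepts the second.
result = run_until_valid([{"sum": "twelve"}, {"sum": 12}])
```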

Runtime

Runtime is an embedded Python REPL (Read-Eval-Print Loop) environment that supports top-level await.

Core Capabilities
  • Code Execution: Executes Python code generated by the LLM.

  • Observation Feedback: Captures all print outputs during code execution and feeds them back to the LLM as "observation results", helping AI refine and iterate its subsequent code.

  • Exception Handling: Gracefully handles exceptions that may occur during code execution and feeds error information back to the LLM, enabling it to self-correct.

  • Object Sharing: Runtime shares the same process space with the host program, allowing AI-generated code to directly access and manipulate Python objects passed from the host program (such as objects defined in input and tools), achieving true embedded collaboration.
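The capabilities above can be sketched with nothing more than `exec` and stdout capture — a toy model, not the real Runtime (it omits top-level await, among other things):

```python
import io
import contextlib

# Host objects shared with the generated code (input data, tools, ...)
namespace = {"numbers": [1, 2, 3, 4, 5, 6]}

# Code the LLM might generate against that namespace
generated_code = "print(sum(n for n in numbers if n % 2 == 0))"

buf = io.StringIO()
try:
    with contextlib.redirect_stdout(buf):
        exec(generated_code, namespace)       # runs in the shared namespace
    observation = buf.getvalue()              # print output -> observation feedback
except Exception as exc:
    observation = f"{type(exc).__name__}: {exc}"  # errors also go back to the LLM
```

Because `namespace` is the same dict the host holds, the generated code reads and mutates host objects directly — no serialization boundary.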

🗺️ Roadmap

Coming Soon:

  • 🎨 Intent visualization UI (Agent Pattern Graph → IntentLang conversion)

  • 🌐 More real-world examples

Long-term Vision:

  • 🧠 Native support in LLMs for code-level intent expression

  • 🏪 Intent marketplace for domain-specific patterns

🤝 About

IntentLang is created and maintained by 淚笑.

For questions, suggestions, or collaboration, see the project repository: https://github.com/l3yx/intentlang
