
pydantic-ai-mlx

MLX local inference for Pydantic AI through LM Studio or mlx-lm directly.


Run MLX compatible HuggingFace models on Apple silicon locally with Pydantic AI.

Two backends are provided:

  • LM Studio backend (an OpenAI-compatible server that can also use mlx-lm; the model runs in a separate background process)
  • mlx-lm backend (direct integration with Apple's library; the model runs within your application; support is experimental)

STILL IN DEVELOPMENT, NOT RECOMMENDED FOR PRODUCTION USE YET.

Contributions are welcome!

Features

  • LM Studio backend (should be fully supported)
  • Streaming text support for the mlx-lm backend
  • Tool calling support for the mlx-lm backend

As of January 2025, Apple's MLX appears to be more performant on Apple silicon than llama.cpp (which Ollama uses).

Installation

uv add pydantic-ai-mlx
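
Or, if you are not using uv, installing from PyPI with pip should work as well:

pip install pydantic-ai-mlx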

Usage

LM Studio backend

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai_lm_studio import LMStudioModel

model = LMStudioModel(model_name="mlx-community/Qwen2.5-7B-Instruct-4bit") # supports tool calling
agent = Agent(model, system_prompt="You are a chatbot.")

async def stream_response(user_prompt: str, message_history: list[ModelMessage]):
    async with agent.run_stream(user_prompt, message_history=message_history) as result:
        async for message in result.stream():
            yield message
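
For reference, a minimal script that drives the generator above might look like the following. This assumes the snippet above is in scope and that LM Studio is running locally with the model loaded; note that, depending on the Pydantic AI version, stream() may yield the accumulated text so far rather than incremental deltas:

import asyncio

async def main() -> None:
    # Stream a reply to a single prompt with no prior history;
    # each yielded item is the response text streamed so far.
    async for text in stream_response("Hello, who are you?", []):
        print(text)

asyncio.run(main())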

mlx-lm backend

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai_mlx_lm import MLXModel

model = MLXModel(model_name="mlx-community/Llama-3.2-3B-Instruct-4bit")
# See https://github.com/ml-explore/mlx-examples/blob/main/llms/README.md#supported-models
# also https://huggingface.co/mlx-community

agent = Agent(model, system_prompt="You are a chatbot.")

async def stream_response(user_prompt: str, message_history: list[ModelMessage]):
    async with agent.run_stream(user_prompt, message_history=message_history) as result:
        async for message in result.stream():
            yield message
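
Since tool calling is supported on the mlx-lm backend, tools can be registered through Pydantic AI's usual decorators. A minimal sketch using tool_plain follows; the add tool is a made-up example, and result.data may be named result.output in newer Pydantic AI versions:

from pydantic_ai import Agent
from pydantic_ai_mlx_lm import MLXModel

model = MLXModel(model_name="mlx-community/Llama-3.2-3B-Instruct-4bit")
agent = Agent(model, system_prompt="You are a chatbot.")

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Add two integers."""  # hypothetical example tool
    return a + b

result = agent.run_sync("What is 2 + 3? Use the add tool.")
print(result.data)  # result.output in newer Pydantic AI versions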

Download files

Download the file for your platform.

Source Distribution

pydantic_ai_mlx-0.2.3.tar.gz (47.2 kB)

Built Distribution

pydantic_ai_mlx-0.2.3-py3-none-any.whl (9.3 kB)

File details

Details for the file pydantic_ai_mlx-0.2.3.tar.gz.

File metadata

  • Download URL: pydantic_ai_mlx-0.2.3.tar.gz
  • Upload date:
  • Size: 47.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.5

File hashes

Hashes for pydantic_ai_mlx-0.2.3.tar.gz
Algorithm Hash digest
SHA256 eeec51fcc71f38fbdb57e560c208c287a559f7edbf233e20f2c63c9c08c82f26
MD5 75e7514f774e031a5482a71e0a17d011
BLAKE2b-256 557a0ed51a6faca6bbe392b6e81fbcd69bb2f577c31c103ea7719bcf5661bfb2

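To verify a download against the published digests, Python's hashlib can recompute the SHA256 hash (the file is assumed to be in the current directory):

import hashlib

# Recompute the SHA256 digest of the downloaded sdist and compare it
# against the value published above.
with open("pydantic_ai_mlx-0.2.3.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "eeec51fcc71f38fbdb57e560c208c287a559f7edbf233e20f2c63c9c08c82f26"
print("OK" if digest == expected else "hash mismatch")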

File details

Details for the file pydantic_ai_mlx-0.2.3-py3-none-any.whl.

File hashes

Hashes for pydantic_ai_mlx-0.2.3-py3-none-any.whl
Algorithm Hash digest
SHA256 5efd11f17f6f91c66b3af55565ae9de019cbab40533ff45c332ab058ef270f32
MD5 2a577ab6ffc91ea3ec3defeed3ce29c0
BLAKE2b-256 52fdae0b093a767c3d7fc6387f430c4c34aec7ce9d492e2f594630cbfe6bea13

