
pydantic-ai-mlx

Local MLX inference for Pydantic AI, via LM Studio or directly through mlx-lm.



Run MLX-compatible HuggingFace models on Apple silicon locally with Pydantic AI.

Two backends are provided:

  • LM Studio backend (an OpenAI-compatible server that can also use mlx-lm; the model runs in a separate process)
  • mlx-lm backend (direct integration with Apple's library; the model runs within your Python process; experimental)

STILL IN DEVELOPMENT, NOT RECOMMENDED FOR PRODUCTION USE YET.

Contributions are welcome!

Features

  • LM Studio backend (should be fully supported)
  • Streaming text support for mlx-lm backend
  • Tool calling support for mlx-lm backend

As of January 2025, Apple's MLX appears more performant on Apple silicon than llama.cpp (used by Ollama).

Installation

uv add pydantic-ai-mlx
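
Or with pip:

pip install pydantic-ai-mlx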

Usage

LM Studio backend

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai_lm_studio import LMStudioModel

model = LMStudioModel(model_name="mlx-community/Qwen2.5-7B-Instruct-4bit")  # supports tool calling
agent = Agent(model, system_prompt="You are a chatbot.")

async def stream_response(user_prompt: str, message_history: list[ModelMessage]):
    async with agent.run_stream(user_prompt, message_history=message_history) as result:
        async for message in result.stream():
            yield message
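
To consume that generator, here is a minimal driver (a sketch; the prompt is arbitrary, and a running LM Studio server with the model loaded is assumed):

import asyncio

async def main():
    history: list[ModelMessage] = []  # no prior conversation
    final = None
    async for chunk in stream_response("Hello! Who are you?", history):
        final = chunk  # keep the most recent streamed state
    print(final)

asyncio.run(main())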

mlx-lm backend

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai_mlx_lm import MLXModel

model = MLXModel(model_name="mlx-community/Llama-3.2-3B-Instruct-4bit")
# See https://github.com/ml-explore/mlx-examples/blob/main/llms/README.md#supported-models
# also https://huggingface.co/mlx-community

agent = Agent(model, system_prompt="You are a chatbot.")

async def stream_response(user_prompt: str, message_history: list[ModelMessage]):
    async with agent.run_stream(user_prompt, message_history=message_history) as result:
        async for message in result.stream():
            yield message
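
Since the mlx-lm backend supports tool calling, tools can be registered on the same agent with Pydantic AI's tool_plain decorator. A minimal sketch, continuing the example above (the get_current_time tool is hypothetical):

from datetime import datetime

@agent.tool_plain
def get_current_time() -> str:
    """Return the current time as an ISO 8601 string."""
    return datetime.now().isoformat()

result = agent.run_sync("What time is it right now?")
print(result.data)  # final response text; newer Pydantic AI releases name this .output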



Download files

Download the file for your platform.

Source Distribution

pydantic_ai_mlx-0.2.1.tar.gz (43.3 kB)


Built Distribution


pydantic_ai_mlx-0.2.1-py3-none-any.whl (9.7 kB)


File details

Details for the file pydantic_ai_mlx-0.2.1.tar.gz.

File metadata

  • Download URL: pydantic_ai_mlx-0.2.1.tar.gz
  • Upload date:
  • Size: 43.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.5.24

File hashes

Hashes for pydantic_ai_mlx-0.2.1.tar.gz
  • SHA256: 8b3e666cc914fa8a925ce72958e60599575fad1533029fba06f861fa7a930ab7
  • MD5: d8729947894f2bc45a78a5c9dc03d097
  • BLAKE2b-256: 84243cd3d9988bbe919dc4d84a0aba835b69099843d8f09b1abc84b132f17a3a
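
To check a downloaded file against these digests, a minimal sketch using Python's standard hashlib:

from hashlib import sha256

with open("pydantic_ai_mlx-0.2.1.tar.gz", "rb") as f:
    print(sha256(f.read()).hexdigest())  # should match the SHA256 digest above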


File details

Details for the file pydantic_ai_mlx-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: pydantic_ai_mlx-0.2.1-py3-none-any.whl
  • Size: 9.7 kB
  • Tags: Python 3

File hashes

Hashes for pydantic_ai_mlx-0.2.1-py3-none-any.whl
  • SHA256: 3cb13df1812e93bf40ec91c7c8bc0b87b61e49eed33ced9949941f4527cbaacb
  • MD5: d541901eae99c71cdfd496478c636edd
  • BLAKE2b-256: 57afa2df30a8b32c9e2a937db6155f65a563ce66c3fc6fb535c3b103734d8f57

