pydantic-ai-mlx

Local MLX inference for Pydantic AI, through LM Studio or directly via mlx-lm.


Run MLX-compatible Hugging Face models locally on Apple silicon with Pydantic AI.

Two backend options are provided:

  • LM Studio backend (an OpenAI-compatible server that can also use mlx-lm; the model runs in a separate background process)
  • mlx-lm backend (direct integration with Apple's library; the model runs within your application; experimental support)

STILL IN DEVELOPMENT, NOT RECOMMENDED FOR PRODUCTION USE YET.

Contributions are welcome!

Features

  • LM Studio backend (should be fully supported)
  • Streaming text support for mlx-lm backend
  • Tool calling support for mlx-lm backend

As of January 2025, Apple's MLX appears to be more performant on Apple silicon than llama.cpp (which powers Ollama).

Installation

uv add pydantic-ai-mlx
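
Or, with pip:

pip install pydantic-ai-mlx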

Usage

LM Studio backend

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai_lm_studio import LMStudioModel

model = LMStudioModel(model_name="mlx-community/Qwen2.5-7B-Instruct-4bit")  # supports tool calling
agent = Agent(model, system_prompt="You are a chatbot.")

async def stream_response(user_prompt: str, message_history: list[ModelMessage]):
    # message_history is a keyword-only argument on Agent.run_stream
    async with agent.run_stream(user_prompt, message_history=message_history) as result:
        async for message in result.stream():
            yield message
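
A minimal sketch of consuming this generator from an asyncio entry point (the driver below is illustrative, not part of the package; with a plain-text agent, each yielded message is the response accumulated so far rather than a delta):

import asyncio

async def main():
    history: list[ModelMessage] = []
    async for message in stream_response("Hello, who are you?", history):
        print(message)  # response so far, not a delta

asyncio.run(main())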

mlx-lm backend

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai_mlx_lm import MLXModel

model = MLXModel(model_name="mlx-community/Llama-3.2-3B-Instruct-4bit")
# See https://github.com/ml-explore/mlx-examples/blob/main/llms/README.md#supported-models
# also https://huggingface.co/mlx-community

agent = Agent(model, system_prompt="You are a chatbot.")

async def stream_response(user_prompt: str, message_history: list[ModelMessage]):
    # message_history is a keyword-only argument on Agent.run_stream
    async with agent.run_stream(user_prompt, message_history=message_history) as result:
        async for message in result.stream():
            yield message
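
Since this backend supports tool calling, a tool can be registered on the agent in the usual Pydantic AI way. A minimal sketch using the tool_plain decorator (the current_time tool is hypothetical, for illustration only):

from datetime import datetime

@agent.tool_plain
def current_time() -> str:
    """Return the current local time as an ISO 8601 string."""
    return datetime.now().isoformat()

result = agent.run_sync("What time is it right now?")
print(result.data)  # result.output on newer pydantic-ai versions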
