
LangExtract llama-cpp-python Provider

A provider plugin for LangExtract that supports llama-cpp-python models.

Installation

pip install langextract-llamacpp

Supported Model IDs

Model IDs use one of the following formats:

  1. HuggingFace repo with file name: hf:<hf_repo_id>:<filename>
  2. HuggingFace repo without a file name: hf:<hf_repo_id>; in this case the filename will be None
  3. Local file: file:<path_to_model>

hf_repo_id is an existing HuggingFace model repository.
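
For example (repository and file names below are illustrative, taken from the usage examples further down):

model_id = "hf:MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF:*Q4_K_M.gguf"  # repo + GGUF file pattern
model_id = "hf:MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF"  # repo only; filename is None
model_id = "file:Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"  # local GGUF file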

Usage

Using a HuggingFace repository; this will call Llama.from_pretrained(...).

import langextract as lx

config = lx.factory.ModelConfig(
    model_id="hf:MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF:*Q4_K_M.gguf",
    provider="LlamaCppLanguageModel", # optional as hf: will resolve to the model
    provider_kwargs=dict(
        n_gpu_layers=-1,
        n_ctx=4096,
        verbose=False,
        completion_kwargs=dict(
            temperature=1.1,
            seed=42,
        ),
    ),
)

model = lx.factory.create_model(config)

result = lx.extract(
    model=model,
    text_or_documents="Your input text",
    prompt_description="Extract entities",
    examples=[...],
)

Using a local file path; this will call Llama(...).

import langextract as lx

config = lx.factory.ModelConfig(
    model_id="file:Mistral-7B-Instruct-v0.3.Q4_K_M.gguf",
    provider="LlamaCppLanguageModel", # optional as file: will resolve to the model
    provider_kwargs=dict(
        ...
    ),
)

...

For provider_kwargs, refer to the documentation for the Llama class.

For completion_kwargs, refer to the documentation for the create_chat_completion method.
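
Since completion_kwargs is forwarded to create_chat_completion, any of that method's sampling parameters can be set there. A minimal sketch (the values are illustrative):

completion_kwargs=dict(
    temperature=0.7,  # sampling temperature
    top_p=0.9,  # nucleus sampling cutoff
    max_tokens=512,  # cap on generated tokens
    seed=42,  # fixed seed for reproducible sampling
)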

OpenAI compatible Web Server

When using the llama-cpp-python server (or llama.cpp), you can use OpenAILanguageModel in the provider field, as both implement an OpenAI-compatible web server.

To set this up, choose OpenAILanguageModel as the provider and supply the server’s base URL and an API key (any value) in provider_kwargs. The model_id field is optional.
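
For example, the llama-cpp-python server (installed via pip install 'llama-cpp-python[server]') can be started as follows; the model path is illustrative, and by default the server listens on http://localhost:8000:

python -m llama_cpp.server --model Mistral-7B-Instruct-v0.3.Q4_K_M.gguf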

config = lx.factory.ModelConfig(
    model_id="local", # optional
    provider="OpenAILanguageModel", # explicitly set the provider to `OpenAILanguageModel`
    provider_kwargs=dict(
        base_url="http://localhost:8000/v1/",
        api_key="llama-cpp", # any value; mandatory
    ),
)

model = lx.factory.create_model(config)

result = lx.extract(
    model=model,
    ...
)

Development

  1. Install in development mode: uv pip install -e .
  2. Run tests: uv run test_plugin.py
  3. Build package: uv build
  4. Publish to PyPI: uv publish
