

LlamaIndex Llms Integration: IPEX-LLM

IPEX-LLM is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex and Max) with very low latency. This module enables the use of LLMs optimized with ipex-llm in LlamaIndex pipelines.

Installation

On CPU

pip install llama-index-llms-ipex-llm

On GPU

pip install llama-index-llms-ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

Usage

from llama_index.llms.ipex_llm import IpexLLM
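
A minimal sketch of loading a model and running a completion follows. The from_model_id constructor and its parameters (model_name, tokenizer_name, context_window, max_new_tokens, device_map) are assumptions modeled on other local-model LlamaIndex integrations; verify the exact signature against the project's examples.

from llama_index.llms.ipex_llm import IpexLLM

# Load a Hugging Face model with ipex-llm optimizations.
# Constructor and parameter names here are assumptions; check the
# project's examples for the exact API.
llm = IpexLLM.from_model_id(
    model_name="HuggingFaceH4/zephyr-7b-alpha",
    tokenizer_name="HuggingFaceH4/zephyr-7b-alpha",
    context_window=512,
    max_new_tokens=128,
    device_map="cpu",  # assumed: "xpu" targets Intel GPUs when installed with the [xpu] extra
)

# complete() is part of the standard LlamaIndex LLM interface.
response = llm.complete("What is IPEX-LLM?")
print(response)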

Examples
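
A short sketch of streaming and chat usage, reusing the llm instance from the Usage section above. stream_complete and chat are part of the standard LlamaIndex LLM interface; their support for IpexLLM is assumed here and should be confirmed against the project's notebooks.

from llama_index.core.llms import ChatMessage

# Stream tokens as they are generated (standard LlamaIndex interface;
# assumed to be supported by IpexLLM).
for chunk in llm.stream_complete("Explain low-bit LLM inference in one sentence."):
    print(chunk.delta, end="", flush=True)

# Chat-style interaction with role-tagged messages.
messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="What hardware does IPEX-LLM target?"),
]
print(llm.chat(messages))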



