llama-index embeddings ipex-llm integration

Project description

LlamaIndex Embeddings Integration: IPEX-LLM

IPEX-LLM is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex, and Max) with very low latency. This module allows loading embedding models with IPEX-LLM optimizations.
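A minimal usage sketch of the integration. It assumes the package exports an `IpexLLMEmbedding` class and that the constructor accepts a HuggingFace model name and a target device; the model name shown is an illustrative choice, not something this page specifies, so check the package documentation for the exact parameters supported by your version.

```python
# Hedged sketch of using this integration; class name and parameters are
# assumptions based on the llama-index embeddings integration convention.
from llama_index.embeddings.ipex_llm import IpexLLMEmbedding

embedding_model = IpexLLMEmbedding(
    model_name="BAAI/bge-large-en-v1.5",  # hypothetical example model
    device="cpu",  # "xpu" would target an Intel GPU
)

# Embed a single piece of text; returns a list of floats.
text_embedding = embedding_model.get_text_embedding(
    "IPEX-LLM is a PyTorch library for running LLMs on Intel CPU and GPU."
)
print(len(text_embedding))
```

On an Intel GPU machine, swapping `device="cpu"` for `device="xpu"` is the usual way to move the optimized model onto the iGPU or a discrete Arc/Flex/Max card.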

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
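Rather than downloading the files manually, the package is normally installed with pip; a minimal sketch, using the package name and version shown on this page:

```shell
# Install the pinned release of this integration from PyPI
pip install llama-index-embeddings-ipex-llm==0.3.0
```

Pinning the version is optional; omitting `==0.3.0` installs the latest release instead.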

Source Distribution

llama_index_embeddings_ipex_llm-0.3.0.tar.gz (4.4 kB view details)

Uploaded Source

Built Distribution

llama_index_embeddings_ipex_llm-0.3.0-py3-none-any.whl

If you're not sure about the file name format, learn more about wheel file names.

File details

Details for the file llama_index_embeddings_ipex_llm-0.3.0.tar.gz.

File metadata

File hashes

Hashes for llama_index_embeddings_ipex_llm-0.3.0.tar.gz:

SHA256: 3df6750fdd8a042ca1279676061178935c589da3aa04f6cbd9ede6f8cff5567c
MD5: 7ca2aff1aea14954fc0ef30f5817531c
BLAKE2b-256: 5ab1ec6e213e5534579375e859d26767128d87f89a722f5429a0eb6f1d8fc806

See more details on using hashes here.
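A published digest like the SHA256 above can be checked locally before installing a downloaded file. A minimal sketch using Python's standard `hashlib` (the file path is hypothetical):

```python
import hashlib


def sha256_hexdigest(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the digest published on this page:
# expected = "3df6750fdd8a042ca1279676061178935c589da3aa04f6cbd9ede6f8cff5567c"
# assert sha256_hexdigest("llama_index_embeddings_ipex_llm-0.3.0.tar.gz") == expected
```

Reading in chunks keeps memory use flat for large archives; for a 4.4 kB sdist it makes no practical difference, but the same helper works unchanged on multi-gigabyte files.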

File details

Details for the file llama_index_embeddings_ipex_llm-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_embeddings_ipex_llm-0.3.0-py3-none-any.whl:

SHA256: 6de54bf4bc0a750c9fa40077ea317316c359ee3c8bd461dd8714b1905ade36bd
MD5: f20cff446e45d9306b989d1a522ae915
BLAKE2b-256: c2251dd2a819fb929486c222d05b961e44f4f4ab47667eea7c241ce6414e4e61

See more details on using hashes here.
