
Large Language Model Development Toolkit

Project description

IPEX-LLM is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) with very low latency.
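For context, a minimal usage sketch follows. It assumes the ipex_llm.transformers wrapper mirrors the Hugging Face transformers API; the model id, prompt, and generation settings are illustrative placeholders, not taken from this page.

    # Minimal sketch (assumptions noted above): load a Hugging Face model through
    # ipex_llm's transformers wrapper with low-bit weights, then run generation.
    import torch
    from transformers import AutoTokenizer
    from ipex_llm.transformers import AutoModelForCausalLM  # assumed wrapper module

    model_path = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical model id

    # load_in_4bit quantizes weights to INT4 for low-latency inference on Intel hardware
    model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    inputs = tokenizer("What is IPEX-LLM?", return_tensors="pt")
    with torch.inference_mode():
        output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))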

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
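If you are unsure whether the Windows wheel below matches your environment, a small check like the following can help. It relies on the third-party packaging library, which is an assumption on my part rather than something this page prescribes.

    # Sketch: check whether the py3-none-win_amd64 wheel is installable in this
    # environment, using the third-party "packaging" library (pip install packaging).
    from packaging.tags import sys_tags
    from packaging.utils import parse_wheel_filename

    wheel = "ipex_llm-2.1.0b20240103-py3-none-win_amd64.whl"
    _name, _version, _build, wheel_tags = parse_wheel_filename(wheel)

    supported = set(sys_tags())  # every tag this interpreter/OS combination accepts
    if wheel_tags & supported:
        print(f"{wheel} is compatible with this environment")
    else:
        print(f"{wheel} is NOT compatible with this environment")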

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

ipex_llm-2.1.0b20240103-py3-none-win_amd64.whl (4.0 MB)

Uploaded: Python 3, Windows x86-64

File details

Details for the file ipex_llm-2.1.0b20240103-py3-none-win_amd64.whl.

File metadata

File hashes

Hashes for ipex_llm-2.1.0b20240103-py3-none-win_amd64.whl

Algorithm    Hash digest
SHA256       b9c4fdf6adeeb6b53244c07174678034a33c2242f0cf6fb453352bd430a395e7
MD5          439a5e538ed60989162057862bee481d
BLAKE2b-256  f5235bed0a36d438ccaaadce251ef198fa24244ed7eeee86790179a3a9691165

See more details on using hashes in the packaging documentation.
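To confirm that a downloaded copy of the wheel matches the digests above, you can recompute them locally. The sketch below uses Python's standard hashlib; the local file path is a placeholder assumption.

    # Sketch: recompute the SHA256 digest of a downloaded wheel and compare it
    # to the value published above. The file path is a placeholder.
    import hashlib

    wheel_path = "ipex_llm-2.1.0b20240103-py3-none-win_amd64.whl"  # local download
    expected_sha256 = "b9c4fdf6adeeb6b53244c07174678034a33c2242f0cf6fb453352bd430a395e7"

    sha256 = hashlib.sha256()
    with open(wheel_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            sha256.update(chunk)

    print("SHA256 matches:", sha256.hexdigest() == expected_sha256)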
