
Large Language Model Development Toolkit

Project description

IPEX-LLM is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, and Max) with very low latency.
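As a rough usage sketch of the description above (assuming the transformers-style loading API shown in the project's documentation; the model path and prompt below are placeholders):

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Placeholder model identifier; any Hugging Face-format causal LM path is used the same way.
model_path = "meta-llama/Llama-2-7b-chat-hf"

# load_in_4bit=True asks ipex-llm to quantize weights to a low-bit format at load time,
# following the transformers-style API described in the project's documentation.
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What is Intel Arc?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))

This sketch runs on the CPU; the project's documentation also describes moving the model and inputs to Intel GPU devices for accelerated inference.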

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

ipex_llm-2.1.0b20240102-py3-none-win_amd64.whl (4.0 MB)

Uploaded: Python 3, Windows x86-64

File details

Details for the file ipex_llm-2.1.0b20240102-py3-none-win_amd64.whl.

File metadata

File hashes

Hashes for ipex_llm-2.1.0b20240102-py3-none-win_amd64.whl
Algorithm Hash digest
SHA256 6cb8b8cfd60a7e5db5cf3f48a7e1d23d9b3cebf9ba2eed8de4c33e62354c9956
MD5 4ac25dd87a31e6bc6d701cff036b5a68
BLAKE2b-256 734b08caacafb3ad60ee24b25b839fecef0a926cc8ee6b32839c2368f3000390

See the Python packaging documentation for more details on using hashes.
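As a quick check, a downloaded wheel can be verified against the SHA256 digest listed above using only the standard library (a minimal sketch; the file path is a placeholder for wherever the wheel was saved):

import hashlib

expected = "6cb8b8cfd60a7e5db5cf3f48a7e1d23d9b3cebf9ba2eed8de4c33e62354c9956"
wheel_path = "ipex_llm-2.1.0b20240102-py3-none-win_amd64.whl"  # placeholder path

# Hash the wheel in chunks so large files do not need to fit in memory at once.
sha256 = hashlib.sha256()
with open(wheel_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("OK" if sha256.hexdigest() == expected else "MISMATCH")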
