
LlamaIndex Llms Integration: Groq

Welcome to Groq! 🚀 At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload.

Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:

  • Achieve uncompromised low latency and performance for real-time AI and HPC inferences 🔥
  • Know the exact performance and compute time for any given workload 🔮
  • Take advantage of our cutting-edge technology to stay ahead of the competition 💪

Want more Groq? Check out our website for more resources and join our Discord community to connect with our developers!

Develop

To create a development environment, install Poetry, then run:

poetry install --with dev

Testing

To test the integration, first enter the poetry venv:

poetry shell

Then tests can be run with make:

make test

Integration tests

Integration tests will be skipped unless an API key is provided. API keys can be created at the Groq console. Once created, store the API key in an environment variable and run the tests:

export GROQ_API_KEY=<your key here>
make test

Linting and Formatting

Linting and code formatting can be executed with make:

make format
make lint

Download files

Source Distribution

llama_index_llms_groq-0.4.1.tar.gz (4.2 kB)

Built Distribution

llama_index_llms_groq-0.4.1-py3-none-any.whl (3.7 kB)

File details

Hashes for llama_index_llms_groq-0.4.1.tar.gz:

  • SHA256: 841f04fbd0adb99c7591a0f9e6aaa23e3635a342fcfbf5e393b1dff2390f10d0
  • MD5: 692653235eb7c51c61bd1e0a170f2c8c
  • BLAKE2b-256: b677f1c2c009fe7389bcc0b28019db742ae5964b5968c118ce5465c124a4dbd5

Hashes for llama_index_llms_groq-0.4.1-py3-none-any.whl:

  • SHA256: d376c9ecc59bcebc7ba2e82cdfa12c2c51b64690556f76bc11aab14769ae26c7
  • MD5: d1eb39bc6d223e624403b1f8862c98a0
  • BLAKE2b-256: 5ad7d69f03e3d0d4f5599b58c64e8e8050d87369b928c3a7b2f9837007ea53b1
