
llama-index-llms-groq 0.3.0

Project description

LlamaIndex Llms Integration: Groq

Welcome to Groq! 🚀 At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload.

Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:

  • Achieve uncompromised low latency and performance for real-time AI and HPC inferences 🔥
  • Know the exact performance and compute time for any given workload 🔮
  • Take advantage of our cutting-edge technology to stay ahead of the competition 💪

Want more Groq? Check out our website for more resources and join our Discord community to connect with our developers!
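To try the integration in LlamaIndex, install the package:

pip install llama-index-llms-groq

A minimal usage sketch follows. The model name and prompt are illustrative; check the Groq console for the models currently available, and the API key can usually also be supplied via the GROQ_API_KEY environment variable instead of being passed explicitly.

from llama_index.llms.groq import Groq

# Point LlamaIndex at a Groq-hosted model (model name is illustrative).
llm = Groq(model="llama3-70b-8192", api_key="your_api_key")

# Run a single completion and print the generated text.
response = llm.complete("Explain why low-latency inference matters for real-time applications.")
print(response)

The same llm object can be passed anywhere LlamaIndex expects an LLM, for example when building a query engine over an index.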

Develop

To create a development environment, install Poetry, then run:

poetry install --with dev

Testing

To test the integration, first enter the poetry venv:

poetry shell

Then tests can be run with make:

make test

Integration tests

Integration tests will be skipped unless an API key is provided. API keys can be created at the Groq console. Once created, store the API key in an environment variable and run the tests:

export GROQ_API_KEY=<your key here>
make test
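For reference, this skip behaviour matches a standard pytest pattern. The following is a sketch under the assumption that pytest is the runner and that the gating is done on the GROQ_API_KEY variable; the actual test module in the repository may be organised differently.

import os

import pytest

from llama_index.llms.groq import Groq

# Hypothetical integration test: skipped automatically when GROQ_API_KEY is not set.
@pytest.mark.skipif(
    os.environ.get("GROQ_API_KEY") is None,
    reason="GROQ_API_KEY not set; skipping live Groq integration test",
)
def test_groq_complete():
    llm = Groq(model="llama3-70b-8192")  # model name is illustrative
    response = llm.complete("Reply with a single word: hello")
    assert response.text  # a non-empty completion indicates the round trip worked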

Linting and Formatting

Linting and code formatting can be executed with make:

make format
make lint



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_index_llms_groq-0.3.0.tar.gz (2.7 kB, Source)

Built Distribution

llama_index_llms_groq-0.3.0-py3-none-any.whl (2.9 kB, Python 3 wheel)

File details

Details for the file llama_index_llms_groq-0.3.0.tar.gz.

File metadata

  • Download URL: llama_index_llms_groq-0.3.0.tar.gz
  • Upload date:
  • Size: 2.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0

File hashes

Hashes for llama_index_llms_groq-0.3.0.tar.gz:

  • SHA256: f3ce9511783fa9fc75e00e048d8590901b4801ab18d277d3b96eab84cec1ca64
  • MD5: 723078f9c49100a57a9e004893de6afc
  • BLAKE2b-256: 281e3ef527582f9658afb055cfbab10d3119462f8bde9cf8fc2293fe185f63ca

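If you download the sdist manually, you can check it against the SHA256 digest above before installing. A minimal sketch using only the standard library (the filename assumes the default download name):

import hashlib

# Expected SHA256 digest of the sdist, taken from the list above.
EXPECTED_SHA256 = "f3ce9511783fa9fc75e00e048d8590901b4801ab18d277d3b96eab84cec1ca64"

with open("llama_index_llms_groq-0.3.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, "SHA256 mismatch: do not install this file"
print("SHA256 verified")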

File details

Details for the file llama_index_llms_groq-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llama_index_llms_groq-0.3.0-py3-none-any.whl:

  • SHA256: 47799133e35b8671ca0a70e8c6af39713f28c39b99dd5687e1c3bb847d263357
  • MD5: c569f435bb3fda14c8b7e1cbeefc8c53
  • BLAKE2b-256: 62a48b6647c405fba7525c3f1aeb0b430a5f67d963032ac341ef14ca23f63686

