
LlamaIndex Llms Integration: Groq

Welcome to Groq! 🚀 At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single-core streaming architecture that sets the standard for GenAI inference speed, with predictable and repeatable performance for any given workload.

Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:

  • Achieve uncompromised low latency and performance for real-time AI and HPC inference 🔥
  • Know the exact performance and compute time for any given workload 🔮
  • Take advantage of our cutting-edge technology to stay ahead of the competition 💪

Want more Groq? Check out our website for more resources and join our Discord community to connect with our developers!
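As a minimal usage sketch of this integration (assuming the package is installed and a GROQ_API_KEY environment variable is set; the model name is illustrative, pick any model available on the Groq console):

```python
import os

from llama_index.llms.groq import Groq

# Model name is illustrative; see the Groq console for available models.
llm = Groq(model="llama3-70b-8192", api_key=os.environ["GROQ_API_KEY"])

response = llm.complete("Explain what an LPU is in one sentence.")
print(response)
```

This requires a valid API key at runtime, since the completion call is served by Groq's hosted API.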

Develop

To create a development environment, install Poetry, then run:

poetry install --with dev

Testing

To test the integration, first activate the Poetry virtual environment:

poetry shell

Tests can then be run with make:

make test

Integration tests

Integration tests will be skipped unless an API key is provided. API keys can be created at the Groq console. Once created, store the API key in an environment variable and run the tests:

export GROQ_API_KEY=<your key here>
make test

Linting and Formatting

Linting and code formatting can be executed with make:

make format
make lint

Download files


Source Distribution

llama_index_llms_groq-0.3.2.tar.gz (4.2 kB)


Built Distribution


llama_index_llms_groq-0.3.2-py3-none-any.whl (3.7 kB)


File details

Details for the file llama_index_llms_groq-0.3.2.tar.gz.

Hashes for llama_index_llms_groq-0.3.2.tar.gz:

SHA256: 1f00da4984e1616f4f5970418b040e7d48a7fdce92c4dfcd75eed131cc270a52
MD5: 709f2b94c2ab3cd7b1cc9119a5e1a1ef
BLAKE2b-256: 7b95291e9b2c8a24797146cf84565fd95228382c88e143dd36c7206461f4b83c
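To verify that a downloaded file matches a published digest before installing, a short sketch using Python's standard-library hashlib (the local file path is hypothetical):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare the result against the published digest, e.g. for the sdist:
# assert sha256_of("llama_index_llms_groq-0.3.2.tar.gz") == (
#     "1f00da4984e1616f4f5970418b040e7d48a7fdce92c4dfcd75eed131cc270a52"
# )
```

Reading in chunks keeps memory use constant regardless of file size; the same pattern works for any of the listed algorithms by swapping the hashlib constructor.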

File details

Details for the file llama_index_llms_groq-0.3.2-py3-none-any.whl.

Hashes for llama_index_llms_groq-0.3.2-py3-none-any.whl:

SHA256: e112293a21f2ea891cd80074059b336f251d45de4f2ec49f0f9784579691ea69
MD5: 2872ea713c4e33806850aa4f5ad23951
BLAKE2b-256: 6770110b5098829737abea1a242e525b05bdbeaa448f6f5b46b08b760c6b22d4
