llm plugin to prompt Cerebras hosted models.

Install this plugin in the same environment as LLM:

llm install llm-cerebras

You'll need a Cerebras API key. Once you have it, store it with:

llm keys set cerebras

Paste the key when prompted.

To use the Cerebras models, run:

llm -m llama3.1-8b "Your prompt here"

Or for the 70B model:

llm -m llama3.1-70b "Your prompt here"

The following options are available:

  • temperature: Controls randomness. Defaults to 0.7, range 0-1.5.
  • max_tokens: The maximum number of tokens to generate.
  • top_p: Alternative to temperature for nucleus sampling. Defaults to 1.
  • seed: For deterministic sampling.

Example usage with options:

llm -m llama3.1-8b "Your prompt" -o temperature 0.5 -o max_tokens 100
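These options are standard sampling knobs. As a rough intuition (a toy sketch with a made-up vocabulary, not the Cerebras implementation): temperature rescales the model's logits before the softmax, top_p keeps the smallest set of tokens whose cumulative probability reaches the threshold, and seed makes the draw reproducible:

```python
import math
import random

# Toy next-token distribution over a 4-token vocabulary (invented for illustration).
LOGITS = {"the": 2.0, "a": 1.0, "cat": 0.5, "dog": 0.1}

def softmax(logits, temperature=1.0):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {t: math.exp(v) / z for t, v in scaled.items()}

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability >= top_p."""
    kept, total = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    return {t: p / total for t, p in kept.items()}  # renormalize survivors

def sample(logits, temperature=1.0, top_p=1.0, seed=None):
    """Seeded draws are deterministic: same seed, same token."""
    probs = top_p_filter(softmax(logits, temperature), top_p)
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Low temperature concentrates mass on "the"; seed fixes the draw.
print(sample(LOGITS, temperature=0.5, top_p=0.9, seed=42))
```

In practice these are just request parameters: the plugin forwards them with the prompt, and the Cerebras API applies them server-side.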

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-cerebras
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

