llm plugin to prompt Cerebras hosted models.
Install this plugin in the same environment as LLM:
llm install llm-cerebras
You'll need to obtain a Cerebras API key. Once you have it, configure the plugin like this:
llm keys set cerebras
To use the Cerebras models, run:
llm -m llama3.1-8b "Your prompt here"
Or for the 70B model:
llm -m llama3.1-70b "Your prompt here"
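The Cerebras API follows the OpenAI-compatible chat completions convention, so the kind of request the plugin sends can be sketched with only the Python standard library. The endpoint URL and payload shape below are assumptions based on that convention, not taken from this plugin's source; the request is built but deliberately not sent, so no API key is needed:

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint; check the Cerebras docs for the real URL.
CEREBRAS_API_URL = "https://api.cerebras.ai/v1/chat/completions"

def build_request(prompt, model="llama3.1-8b", api_key="YOUR_KEY"):
    """Build (but do not send) a chat-completion request for a Cerebras model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        CEREBRAS_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Your prompt here")
# urllib.request.urlopen(req) would send it; omitted here so no key is required.
```

In practice the plugin handles all of this for you; the sketch only illustrates what a single prompt turns into on the wire.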
The following options are available:
temperature: Controls randomness. Defaults to 0.7, range 0 to 1.5.
max_tokens: The maximum number of tokens to generate.
top_p: Alternative to temperature for nucleus sampling. Defaults to 1.
seed: For deterministic sampling.
Example usage with options:
llm -m llama3.1-8b "Your prompt" -o temperature 0.5 -o max_tokens 100
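Values given with `-o` arrive as command-line strings and are checked against the ranges documented above. A hypothetical helper (not part of the plugin's actual code) showing what that validation might look like:

```python
def validate_options(temperature=0.7, max_tokens=None, top_p=1.0, seed=None):
    """Hypothetical validator mirroring the documented option ranges."""
    if not 0 <= temperature <= 1.5:
        raise ValueError("temperature must be between 0 and 1.5")
    if max_tokens is not None and max_tokens < 1:
        raise ValueError("max_tokens must be a positive integer")
    if not 0 < top_p <= 1:
        raise ValueError("top_p must be in (0, 1]")
    # seed needs no range check: any integer gives deterministic sampling
    return {
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "seed": seed,
    }
```

So `-o temperature 0.5 -o max_tokens 100` would pass validation, while `-o temperature 2.0` would be rejected before any request is made.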
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-cerebras
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest