llm plugin to prompt Cerebras hosted models.
Install this plugin in the same environment as LLM:
llm install llm-cerebras
You'll need to obtain a Cerebras API key. Once you have it, configure the plugin like this:
llm keys set cerebras
To use the Cerebras models, run:
llm -m llama3.1-8b "Your prompt here"
Or for the 70B model:
llm -m llama3.1-70b "Your prompt here"
The following options are available:
temperature: Controls randomness. Defaults to 0.7, range 0-1.5.
max_tokens: The maximum number of tokens to generate.
top_p: Alternative to temperature for nucleus sampling. Defaults to 1.
seed: For deterministic sampling.
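To build intuition for what temperature and top_p control, here is a toy sampler, not part of this plugin, that applies temperature scaling and nucleus (top_p) filtering to a list of logits. All names here are illustrative only.

```python
import math
import random

def sample_with_temperature(logits, temperature=0.7, top_p=1.0, seed=None):
    """Toy sampler showing how temperature, top_p and seed shape sampling."""
    rng = random.Random(seed)  # fixed seed -> deterministic sampling
    # Temperature rescales logits: lower values sharpen the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top_p keeps the smallest set of tokens whose cumulative mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    kept_mass = sum(probs[i] for i in kept)
    # Sample among the kept tokens, renormalized.
    r = rng.random() * kept_mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With a low temperature or a small top_p, the most likely token dominates; with a fixed seed, the same call returns the same index each time.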
Example usage with options:
llm -m llama3.1-8b "Your prompt" -o temperature 0.5 -o max_tokens 100
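For a sense of how those `-o` options travel to the server, here is a sketch of an OpenAI-style chat-completions payload. This assumes the Cerebras endpoint is OpenAI-compatible; the function name is hypothetical and the actual plugin code may differ, so check the plugin source for the real mapping.

```python
import json

def build_chat_request(model, prompt, temperature=0.7,
                       max_tokens=None, top_p=1.0, seed=None):
    # Hypothetical helper: builds an OpenAI-style chat payload
    # (an assumption about the Cerebras API, not the plugin's actual code).
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
    }
    # Optional options are omitted from the payload when unset.
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if seed is not None:
        body["seed"] = seed
    return json.dumps(body)
```

The command above would roughly correspond to `build_chat_request("llama3.1-8b", "Your prompt", temperature=0.5, max_tokens=100)`.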
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-cerebras
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest