# yuseful_prompts

Tested prompts for common use-cases using open LLMs with Ollama and LangChain.
## Prerequisites

- ollama installed, with the llama3 model downloaded
## Test

    python3 -m pytest -v ./yuseful_prompts/test_useful_prompts.py
## Snapshot of LLM performance during tests

Here are the results from running the tests on an Intel® Xeon® Gold 5412U server with 256 GB DDR5 ECC RAM and no GPU.
### Financial headlines sentiment extraction
Model | Status | Time (s)
---|---|---
llama3 | OK | 17.68
phi3 | OK | 17.84
aya | OK | 21.68
mistral | OK | 21.76
mistral-openorca | OK | 22.20
gemma2 | OK | 23.14
phi3:medium-128k | OK | 45.87
phi3:14b | OK | 47.36
aya:35b | OK | 77.99
llama3:70b | OK | 144.62
qwen2:72b | OK | 148.25
command-r-plus | OK | 239.20
qwen2 | OKKO | 16.11
I've set qwen2 to OKKO as it systematically considers that "Hedge funds cut stakes in Magnificent Seven to invest in broader AI boom" is a very bullish headline. I didn't discard the model entirely since this is open to interpretation.
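The package's actual prompts are not reproduced here; as a minimal sketch, assuming a five-level label scale, a headline-sentiment prompt and a parser for the model's raw reply might look like the following. The prompt wording, the label set, and the helper names are hypothetical illustrations, not the prompts shipped by yuseful_prompts.

```python
# Hypothetical sketch of a financial-headline sentiment prompt and a parser
# for the model's free-text reply. All names and label values below are
# assumptions for illustration only.

LABELS = ["very bearish", "bearish", "neutral", "bullish", "very bullish"]

PROMPT_TEMPLATE = (
    "Classify the sentiment of the following financial headline.\n"
    "Answer with exactly one of: {labels}.\n"
    "Headline: {headline}\n"
    "Sentiment:"
)


def build_prompt(headline: str) -> str:
    """Fill the template with the allowed labels and the headline."""
    return PROMPT_TEMPLATE.format(labels=", ".join(LABELS), headline=headline)


def parse_sentiment(raw_reply: str) -> str:
    """Map a raw model reply onto one of the allowed labels.

    Longer labels are checked first so that 'very bullish' is not
    mistaken for plain 'bullish'.
    """
    reply = raw_reply.strip().lower()
    for label in sorted(LABELS, key=len, reverse=True):
        if label in reply:
            return label
    raise ValueError(f"unrecognized sentiment: {raw_reply!r}")
```

In a real run, `build_prompt(...)` would be sent to a local model via Ollama (e.g. through LangChain's chat model wrapper), and `parse_sentiment(...)` applied to the reply; a test can then assert that the parsed label matches the expected one for each headline.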