CandyLLM: Unified framework for HuggingFace and OpenAI Text-generation Models

CandyLLM 🍬

A simple, easy-to-use framework for HuggingFace and OpenAI text-generation models. The goal is eventually to integrate other sources, such as custom large language models (LLMs), into one coherent UI.

This is a work in progress, so pull requests and issues are welcome! We nevertheless try to keep the library as stable as possible, so that people installing it do not run into problems.

If you use this library, please cite Shreyan Mitra.

With all the administrivia out of the way, here are some examples of how to use the library. Official documentation is still being set up. The following examples show some common use cases, or tasks, and how a user of CandyLLM would invoke the model of their choice.

Install package

pip install CandyLLM

Task: Fetch Llama3-8b and run it with default parameters on a simple QA prompt without retrieval-augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
myLLM.answer("What is the capital of Uzbekistan?") #Returns Tashkent

This works because the default model is Llama3-8b.
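
If you prefer to be explicit, selecting the model by alias should be equivalent. A minimal sketch, assuming "Llama3-8b" is the alias string (following the alias convention in the Llama2-7b example below):

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="Llama3-8b") #Assumed alias for the default model
myLLM.answer("What is the capital of Uzbekistan?") #Returns Tashkent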

Task: Fetch Llama2-7b and run it with temperature = 0.6 on a QA prompt with retrieval-augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName = "Llama2-7b") #or myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName = "meta-llama/Llama-2-7b-chat-hf", modelNameType="path")
myLLM.answer("What is the capital of Funlandia?", task="QAWithRAG", "The capital of Funlandia is Funtown", temperature=0.6) #Returns Funtown

Task: Fetch GPT-4 and run it with presence_penalty = 0.5 on an Open-Ended Prompt

from CandyLLM import *
myLLM = LLMWrapper("MY_OPENAI_TOKEN", testing=False, source="OpenAI", modelName = "gpt-4-turbo", modelNameType="path")
myLLM.answer("Write a creative essay about sustainability", task="Open-ended", presence_penalty=0.5)

Task: Log out of HuggingFace and OpenAI and remove your API keys from the environment

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.answer(...) #Do something with the LLM
myLLM.logout()

Check for malicious input prompts

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.promptSafetyCheck("Is 1010 John Doe's social security number?") #Returns False to indicate an unsafe prompt
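
A natural pattern is to gate answer() behind the safety check. A minimal sketch using only the calls shown above (the wrapper arguments are placeholders):

myLLM = LLMWrapper("MY_HF_TOKEN", testing=False) #Create some LLM wrapper
prompt = "Is 1010 John Doe's social security number?"
if myLLM.promptSafetyCheck(prompt): #False indicates an unsafe prompt
    print(myLLM.answer(prompt))
else:
    print("Prompt rejected as potentially unsafe")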

Change Config

Want to use a different model? No need to create another wrapper.

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.setConfig("MY_TOKEN", testing = False, source="HuggingFace", modelName = "Mistral", modelNameType = "alias") #Tada: a changed LLM wrapper

Dummy LLM

Sometimes you don't want to spend the time and money on API calls to an actual LLM, especially if you are testing a UI or an integration of a chat service. Dummy LLMs to the rescue! Our dummy LLM is called "Useless", and it returns answers immediately with very little computation (granted, the results it gives are useless - but, hey, what did you expect? 😃)
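
Wiring the dummy model into a test might look like the sketch below. Note that both the testing=True flag and the "Useless" alias as the way to select the dummy backend are assumptions on our part, not documented API:

from CandyLLM import *
testLLM = LLMWrapper("NO_TOKEN_NEEDED", testing=True, modelName="Useless") #Assumed: testing flag plus "Useless" alias select the dummy LLM
testLLM.answer("Any prompt at all") #Returns immediately; the answer itself is useless by design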

CandyUI

CandyUI is the user interface of CandyLLM. It provides a chatbot, a dropdown for choosing the LLM to use, parameter configs for the LLM, and the option to apply pre-hoc and post-hoc methods to the user prompt and LLM output. CandyUI can be integrated into, and communicate with, a larger UI via custom functions, or you can use the selfOutput option to display custom post-hoc metrics within CandyUI itself.

For example, running

from CandyLLM import *

def postprocess(message, response):
    #Sample postprocessor_fn which just returns the difference in length between LLM response and user prompt
    return len(response) - len(message)

x = LLMWrapper.getUI(postprocessor_fn = postprocess, selfOutput = True)

deploys the CandyUI webpage.
