CandyLLM: Unified framework for HuggingFace and OpenAI Text-generation Models

Project description

CandyLLM 🍬

A simple, easy-to-use framework for HuggingFace and OpenAI text-generation models. The goal is to eventually integrate other sources, such as custom large language models (LLMs), into a coherent UI.

This is a work in progress, so pull requests and issues are welcome! We try to keep it as stable as possible, though, so that people installing this library do not run into problems.

If you use this library, please cite Shreyan Mitra.

With all the administrivia out of the way, here are some examples of how to use the library. We are still setting up the official documentation. The following examples show some use cases, or tasks, and how a user of CandyLLM would invoke the model of their choice.

Install package

pip install CandyLLM

Task: Fetch Llama3-8b and run it with default parameters on a simple QA Prompt without retrieval-augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
myLLM.answer("What is the capital of Uzbekistan?") #Returns Tashkent

This works because the default model is Llama3-8b.
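
Spelled out explicitly, the call above is equivalent to the following sketch (assuming "Llama3-8b" is the alias the wrapper falls back to by default):

from CandyLLM import *
# Assumption: passing the default alias explicitly matches the implicit default
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="Llama3-8b")
myLLM.answer("What is the capital of Uzbekistan?") #Returns Tashkent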

Task: Fetch Llama2-7b and run it with temperature = 0.6 on a QA Prompt with retrieval-augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName = "Llama2-7b") #or myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName = "meta-llama/Llama-2-7b-chat-hf", modelNameType="path")
myLLM.answer("What is the capital of Funlandia?", task="QAWithRAG", "The capital of Funlandia is Funtown", temperature=0.6) #Returns Funtown

Task: Fetch GPT-4 and run it with presence_penalty = 0.5 on an Open-Ended Prompt

from CandyLLM import *
myLLM = LLMWrapper("MY_OPENAI_TOKEN", testing=False, source="OpenAI", modelName = "gpt-4-turbo", modelNameType="path")
myLLM.answer("Write a creative essay about sustainability", task="Open-ended", presence_penalty=0.5)

Log out of HuggingFace and OpenAI and remove my API keys from the environment

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.answer(...) #Do something with the LLM
myLLM.logout()

Check for malicious input prompts

LLMWrapper.promptSafetyCheck("Is 1010 John Doe's social security number?") #Returns False to indicate an unsafe prompt
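
A typical pattern is to gate model calls behind this check. The sketch below uses only the calls already shown above; the surrounding control flow is ours, not part of the library:

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
prompt = "Is 1010 John Doe's social security number?"
# promptSafetyCheck returns False for prompts flagged as unsafe
if LLMWrapper.promptSafetyCheck(prompt):
    print(myLLM.answer(prompt))
else:
    print("Refusing to send a potentially unsafe prompt")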

Change Config

Want to use a different model? No need to create another wrapper.

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.setConfig("MY_TOKEN", testing = False, source="HuggingFace", modelName = "Mistral", modelNameType = "alias") #Tada: a changed LLM wrapper

Dummy LLM

Sometimes you don't want to spend the time and money to make API calls to an actual LLM, especially if you are testing a UI or an integration of a chat service. Dummy LLMs to the rescue! Our dummy LLM is called "Useless", and it returns answers immediately with very little computation spent (granted, the results it gives are useless - but, hey, what did you expect? 😃)
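
A minimal sketch of switching to the dummy model (assuming "Useless" is accepted as a modelName alias and that no real API token is needed, since the dummy never makes external calls; neither detail is confirmed above):

from CandyLLM import *
# Assumption: the dummy backend is selected via its "Useless" alias
# and ignores the token, because it never calls an external API
dummy = LLMWrapper("NO_TOKEN_NEEDED", testing=True, modelName="Useless")
dummy.answer("What is the capital of Uzbekistan?") #Returns an instant (useless) answer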

CandyUI

CandyUI is the user interface of CandyLLM. It provides a chatbot, a dropdown for choosing the LLM to use, parameter configs for the LLM, and the option to apply post-hoc and pre-hoc methods to the user prompt and LLM output. CandyUI can be integrated into and communicate with a larger UI via custom functions, or you can use the selfOutput option to display custom post-hoc metrics within CandyUI itself.

For example, running

def postprocess(message, response):
    #Sample postprocessor_fn which just returns the difference in length between LLM response and user prompt
    return len(response) - len(message)
x = LLMWrapper.getUI(postprocessor_fn = postprocess, selfOutput = True, selfOutputLabel = "Length Difference")

deploys the following webpage:

[Screenshot: CandyUI with the chatbot and a "Length Difference" output field]

You can also change how the output is shown. For example, for explainability purposes, you might want to set selfOutputType = "HighlightedText":

def postprocess(message, response):
    #Assigns importance scores to words in the user prompt based on word length
    importantWords = []
    for word in message.split():
        importantWords.append((word, "important" if len(word) > 3 else "unimportant"))
    return importantWords
x = LLMWrapper.getUI(postprocessor_fn = postprocess, selfOutput = True, selfOutputLabel = "Important Words", selfOutputType = "HighlightedText")

The UI now looks like this: [Screenshot: CandyUI highlighting words of the prompt under an "Important Words" label]
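
Pre-hoc hooks on the user prompt can be sketched the same way. The parameter name preprocessor_fn below is hypothetical, assumed by analogy with postprocessor_fn:

def preprocess(message):
    #Hypothetical pre-hoc hook: trim whitespace and cap the prompt length
    return message.strip()[:2000]
x = LLMWrapper.getUI(preprocessor_fn = preprocess, postprocessor_fn = postprocess, selfOutput = True, selfOutputLabel = "Important Words", selfOutputType = "HighlightedText")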

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

candyllm-0.0.6.tar.gz (9.5 kB)

Uploaded Source

Built Distribution

CandyLLM-0.0.6-py3-none-any.whl (10.0 kB)

Uploaded Python 3

File details

Details for the file candyllm-0.0.6.tar.gz.

File metadata

  • Download URL: candyllm-0.0.6.tar.gz
  • Upload date:
  • Size: 9.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for candyllm-0.0.6.tar.gz
Algorithm Hash digest
SHA256 5ea6d401f297b63a4ff1794d52ccb2e5f345326b1964db82ab54527f2b2a3501
MD5 0d3f56e6eb142052b2221c7c856963a8
BLAKE2b-256 326311344cc33a8fbc83f131805c8c7d4cba3bf34bd5099e3eeb527847402e65

See more details on using hashes here.

File details

Details for the file CandyLLM-0.0.6-py3-none-any.whl.

File metadata

  • Download URL: CandyLLM-0.0.6-py3-none-any.whl
  • Upload date:
  • Size: 10.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for CandyLLM-0.0.6-py3-none-any.whl
Algorithm Hash digest
SHA256 95db96db0d90aba5fc5690db4b7fc79c47239501257c9b474509e5557e2b5e9b
MD5 2df245603874b928dbd023b9d39f8bbb
BLAKE2b-256 1de8e55a3e332eb36de1a614c777ae57f038eb36913924ba7ab6314b944f4969

See more details on using hashes here.
