CandyLLM: Unified framework for HuggingFace and OpenAI Text-generation Models

Project description

CandyLLM 🍬

A simple, easy-to-use framework for HuggingFace and OpenAI text-generation models. The goal is to eventually integrate other sources such as custom large language models (LLMs) as well to create a coherent UI.

This is a work-in-progress, so pull-requests and issues are welcome! We try to keep it as stable as possible though, so people installing this library do not have any problems.

If you use this library, please cite Shreyan Mitra.

With all the administrivia out of the way, here are some examples of how to use the library. We are still setting up the official documentation. The following examples show some use cases, or tasks, and how a user of CandyLLM would invoke the model of their choice.

Install package

pip install CandyLLM

Task: Fetch Llama3-8b and run it with default parameters on a simple QA Prompt without retrieval augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
myLLM.answer("What is the capital of Uzbekistan?") # Returns "Tashkent"

This works because the default model is Llama3-8b.

Task: Fetch Llama2-7b and run it with temperature = 0.6 on a QA prompt with retrieval augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="Llama2-7b")
# or: myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="meta-llama/Llama-2-7b-chat-hf", modelNameType="path")
myLLM.answer("What is the capital of Funlandia?", "The capital of Funlandia is Funtown", task="QAWithRAG", temperature=0.6) # Returns "Funtown"

Task: Fetch GPT-4 and run it with presence_penalty = 0.5 on an Open-Ended Prompt

from CandyLLM import *
myLLM = LLMWrapper("MY_OPENAI_TOKEN", testing=False, source="OpenAI", modelName="gpt-4-turbo", modelNameType="path")
myLLM.answer("Write a creative essay about sustainability", task="Open-ended", presence_penalty=0.5)

Log out of HuggingFace and OpenAI and remove my API keys from the environment

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.answer(...) #Do something with the LLM
myLLM.logout()

Check for malicious input prompts

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.promptSafetyCheck("Is 1010 John Doe's social security number?") #Returns False to indicate an unsafe prompt
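CandyLLM's actual safety check lives inside the library, but the idea can be sketched as a simple pattern screen over the prompt. Everything below (the function name, the patterns, the return convention) is an illustrative assumption, not CandyLLM's implementation:

```python
import re

# Illustrative sketch only: a naive pattern screen for sensitive data.
# CandyLLM's real promptSafetyCheck may work very differently.
SENSITIVE_PATTERNS = [
    re.compile(r"social security number", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like digit pattern
    re.compile(r"password", re.IGNORECASE),
]

def prompt_safety_check(prompt: str) -> bool:
    """Return False if the prompt appears to touch sensitive data."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(prompt_safety_check("Is 1010 John Doe's social security number?"))  # False
print(prompt_safety_check("What is the capital of Uzbekistan?"))          # True
```

A real check would go far beyond keyword matching (e.g. classifier-based PII and jailbreak detection), but the False-means-unsafe convention mirrors the example above.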

Change Config

Want to use a different model? No need to create another wrapper.

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.setConfig("MY_TOKEN", testing=False, source="HuggingFace", modelName="Mistral", modelNameType="alias") #Tada: a changed LLM wrapper

Dummy LLM

Sometimes you don't want to spend the time and money to make API calls to an actual LLM, especially if you are testing a UI or an integration of a chat service. Dummy LLMs to the rescue! Our dummy LLM is called "Useless", and it returns answers immediately with very little computation spent (granted, the results it gives are useless - but, hey, what did you expect? 😃)
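CandyLLM ships "Useless" as part of the library; the class below is only a hand-rolled sketch of the same idea, showing how a dummy backend slots into test code. All names here are illustrative assumptions, not CandyLLM internals:

```python
# Illustrative sketch: a stand-in "dummy LLM" for UI/integration tests.
# Not CandyLLM's actual "Useless" model -- just the same idea, hand-rolled.
class DummyLLM:
    def __init__(self, canned_response: str = "This answer is useless."):
        self.canned_response = canned_response

    def answer(self, prompt: str, **kwargs) -> str:
        # No network call, no model inference: returns instantly,
        # ignoring the prompt and any generation parameters.
        return self.canned_response

dummy = DummyLLM()
print(dummy.answer("What is the capital of Uzbekistan?", temperature=0.6))
```

Because the dummy exposes the same `answer(...)` shape as a real wrapper, test code can swap it in without touching the calling UI.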
