
CandyLLM: Unified framework for HuggingFace and OpenAI Text-generation Models

Project description

CandyLLM 🍬

A simple, easy-to-use framework for HuggingFace and OpenAI text-generation models. The goal is to eventually integrate other sources, such as custom large language models (LLMs), into one coherent interface.

This is a work in progress, so pull requests and issues are welcome! That said, we try to keep releases stable so that installing the library does not cause problems.

If you use this library, please cite Shreyan Mitra.

With the administrivia out of the way, here are some examples of how to use the library (the official documentation is still being set up). The following examples show some common tasks and how a user of CandyLLM would invoke the model of their choice.

Install package

pip install CandyLLM

Task: Fetch Llama3-8b and run it with default parameters on a simple QA Prompt without retrieval augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False)
myLLM.answer("What is the capital of Uzbekistan?") #Returns Tashkent

This works because the default model is Llama3-8b.

Task: Fetch Llama2-7b and run it with temperature = 0.6 on a QA Prompt with retrieval augmented generation

from CandyLLM import *
myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="Llama2-7b") #or myLLM = LLMWrapper("MY_HF_TOKEN", testing=False, modelName="meta-llama/Llama-2-7b-chat-hf", modelNameType="path")
myLLM.answer("What is the capital of Funlandia?", "The capital of Funlandia is Funtown", task="QAWithRAG", temperature=0.6) #Returns Funtown
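Conceptually, a QA-with-RAG call supplies retrieved context alongside the question, so the model answers from that context rather than from its training data alone. A minimal sketch of how such a prompt might be assembled (the function name and template below are illustrative, not CandyLLM internals):

```python
def build_rag_prompt(question: str, context: str) -> str:
    """Prepend retrieved context so the model answers from it,
    not from its parametric knowledge alone."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_rag_prompt(
    "What is the capital of Funlandia?",
    "The capital of Funlandia is Funtown.",
)
print(prompt)
```

A model given this prompt can answer "Funtown" even though Funlandia appears nowhere in its training data.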

Task: Fetch GPT-4 and run it with presence_penalty = 0.5 on an Open-Ended Prompt

from CandyLLM import *
myLLM = LLMWrapper("MY_OPENAI_TOKEN", testing=False, source="OpenAI", modelName = "gpt-4-turbo", modelNameType="path")
myLLM.answer("Write a creative essay about sustainability", task="Open-ended", presence_penalty=0.5)

Log out of HuggingFace and OpenAI and remove my API keys from the environment

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.answer(...) #Do something with the LLM
myLLM.logout()

Check for malicious input prompts

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.promptSafetyCheck("Is 1010 John Doe's social security number?") #Returns False to indicate an unsafe prompt
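To give a sense of what such a check does, here is a deliberately naive sketch that flags SSN-style content and returns False for unsafe prompts. The pattern and function name are our own illustration, not CandyLLM's actual implementation:

```python
import re

# Naive safety check: flag prompts that mention SSN-style numbers or
# social security numbers. Real checks are far more thorough.
SSN_PATTERN = re.compile(
    r"\b\d{3}-\d{2}-\d{4}\b|\bsocial security number\b",
    re.IGNORECASE,
)

def naive_prompt_safety_check(prompt: str) -> bool:
    """Return False when the prompt appears to contain sensitive data."""
    return SSN_PATTERN.search(prompt) is None

print(naive_prompt_safety_check("Is 1010 John Doe's social security number?"))  # False
print(naive_prompt_safety_check("What is the capital of France?"))  # True
```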

Change Config

Want to use a different model? No need to create another wrapper.

myLLM = LLMWrapper(...) #Create some LLM wrapper
myLLM.setConfig("MY_TOKEN", testing = False, source="HuggingFace", modelName = "Mistral", modelNameType = "alias") #Tada: a changed LLM wrapper

Dummy LLM

Sometimes you don't want to spend the time and money on API calls to an actual LLM, especially when testing a UI or a chat-service integration. Dummy LLMs to the rescue! Our dummy LLM is called "Useless", and it returns answers immediately with very little computation (granted, the results it gives are useless, but, hey, what did you expect? 😃)
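The idea can be sketched in a few lines of plain Python: a stand-in class that answers instantly with canned text. This is illustrative only; CandyLLM's own dummy model is selected through LLMWrapper, not through a class like this:

```python
class UselessLLM:
    """A stand-in "LLM" that returns a canned reply instantly.

    Useful for exercising a chat UI or integration without paying for,
    or waiting on, real API calls.
    """

    def answer(self, prompt: str, **kwargs) -> str:
        # No network, no model: an immediate, useless-but-shaped response.
        return f"[dummy reply to: {prompt[:40]}]"

bot = UselessLLM()
print(bot.answer("What is the capital of Uzbekistan?"))
```

Because the stub accepts the same call shape as a real wrapper (`answer(prompt, **kwargs)`), swapping it out for the real thing later is a one-line change.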

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

candyllm-0.0.1.tar.gz (6.9 kB)

Uploaded Source

Built Distribution

CandyLLM-0.0.1-py3-none-any.whl (7.3 kB)

Uploaded Python 3

File details

Details for the file candyllm-0.0.1.tar.gz.

File metadata

  • Download URL: candyllm-0.0.1.tar.gz
  • Upload date:
  • Size: 6.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for candyllm-0.0.1.tar.gz
Algorithm Hash digest
SHA256 209b7a62fae7cae64d22a438e31204c380423d1d574c64a7d5359394bda933ef
MD5 3de0a7a541a6d25757ef9afb90125a68
BLAKE2b-256 b1717e2f5147c8e092b01da03b33d06510293fed95e852babf1ca3516904e2d2

See more details on using hashes here.

File details

Details for the file CandyLLM-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: CandyLLM-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 7.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.19

File hashes

Hashes for CandyLLM-0.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 15b29e5f02ba018e0e5f72e8573cc1286608de85ba91306e94d8aaadfa028612
MD5 b8cc6a75bc9558fadedfadc589a1436c
BLAKE2b-256 fe9fc3f7d0474ce84c0f754cf865006434e93c0c8b0ecafcdef18aa1a4f35fc5

See more details on using hashes here.
