
Python package to create an AI clone of yourself using LLMs.

Project description


CloneLLM

Create an AI clone of yourself using LLMs.


Introduction

A minimal Python package that enables you to create an AI clone of yourself using LLMs. Built on top of LiteLLM and LangChain, CloneLLM uses Retrieval-Augmented Generation (RAG) to tailor AI responses as if you were answering the questions yourself.

You can input texts and documents about yourself — including personal information, professional experience, educational background, etc. — which are then embedded into a vector space for dynamic retrieval. This AI clone can act as a virtual assistant or digital representation, capable of handling queries and tasks in a manner that reflects your own knowledge, tone, style, and mannerisms.

Installation

Before installing CloneLLM, make sure you have Python 3.10 or newer installed on your machine.

PyPI

pip install clonellm

Poetry

poetry add clonellm

GitHub

# Clone the repository
git clone https://github.com/msamsami/clonellm.git

# Navigate into the project directory
cd clonellm

# Install the package
pip install .

Usage

Getting started

You can set up a clone of yourself using CloneLLM in just a few lines of code.

Step 1. Gather documents that contain relevant information about you. These documents form the base from which your AI clone will learn to mimic your tone, style, and expertise.

from langchain_core.documents import Document

documents = [
    Document(page_content="My name is Mehdi Samsami."),
    open("about_me.txt", "r").read(),
]

Step 2. Initialize a clone with your documents and your preferred LLM.

from clonellm import CloneLLM

clone = CloneLLM(model="gpt-4o", documents=documents)

Step 3. Configure environment variables to store the API key for your LLM provider.

export OPENAI_API_KEY=sk-...
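
Alternatively, you can set the key from within Python, as later examples in this document do:

import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # equivalent to the export command above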

Step 4. Fit the clone to the data (documents).

clone.fit()

Step 5. Invoke the clone to ask questions.

clone.invoke("What's your name?")

# Response: My name is Mehdi Samsami. How can I help you?

Models

At its core, CloneLLM relies on LiteLLM for interactions with various LLMs. As a result, you can choose from 100+ LLMs from many different providers, including Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, SageMaker, Hugging Face, Replicate, and more.
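
For instance, switching providers is just a matter of changing the model name passed to CloneLLM. The snippet below is a minimal sketch: model identifiers follow LiteLLM's naming conventions, and each provider requires its own credentials.

from clonellm import CloneLLM

# Anthropic model (requires ANTHROPIC_API_KEY to be set)
clone = CloneLLM(model="claude-3-opus-20240229", documents=documents)

# Locally served Ollama model (assumes a running Ollama server; "ollama/llama3" is an example identifier)
clone = CloneLLM(model="ollama/llama3", documents=documents)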

Document loaders

You can use LangChain's document loaders to seamlessly import data from various sources into Document format. Take, for example, text and HTML loaders:

# !pip install unstructured
from langchain_community.document_loaders import TextLoader, UnstructuredHTMLLoader

documents = TextLoader("cv.txt").load() + UnstructuredHTMLLoader("linkedin.html").load()

Or JSON loader:

# !pip install jq
from langchain_community.document_loaders import JSONLoader

documents = JSONLoader(
    file_path='chat.json',
    jq_schema='.messages[].content',
    text_content=False
).load()

RAG

In the basic usage described above, the documents are summarized to create a static context for interacting with the LLM. This is the default behavior when the embedding and vector_store parameters are not specified. For more advanced usage, you can specify an embedding model and a vector store to implement a RAG-based question-answering system. In this scenario, the documents are embedded and stored in the vector store, allowing them to serve as a dynamic retrieval context for each prompt.
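
As a rough sketch of the two modes (using the LiteLLMEmbeddings class introduced below, and the documents list from earlier):

from clonellm import CloneLLM, LiteLLMEmbeddings

# Default: no embedding or vector store, so the documents are summarized into a static context
static_clone = CloneLLM(model="gpt-4o", documents=documents)

# RAG: the documents are embedded and retrieved dynamically for each prompt
rag_clone = CloneLLM(
    model="gpt-4o",
    documents=documents,
    embedding=LiteLLMEmbeddings(model="text-embedding-3-small"),
)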

Embeddings

With LiteLLMEmbeddings, CloneLLM allows you to utilize embedding models from a variety of providers supported by LiteLLM:

from clonellm import CloneLLM, LiteLLMEmbeddings
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

embedding = LiteLLMEmbeddings(model="text-embedding-3-small", dimensions=1024)
clone = CloneLLM(model="gpt-4o-mini", documents=documents, embedding=embedding)

Additionally, you can select any preferred embedding model from LangChain's extensive range. Take, for example, the Hugging Face embedding:

# !pip install --upgrade --quiet sentence_transformers
from langchain_community.embeddings import HuggingFaceEmbeddings
from clonellm import CloneLLM
import os

os.environ["COHERE_API_KEY"] = "cohere-api-key"

embedding = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
clone = CloneLLM(model="command-xlarge-beta", documents=documents, embedding=embedding)

Or, the Llama-cpp embedding:

# !pip install --upgrade --quiet llama-cpp-python
from langchain_community.embeddings import LlamaCppEmbeddings
from clonellm import CloneLLM
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

embedding = LlamaCppEmbeddings(model_path="ggml-model-q4_0.bin")
clone = CloneLLM(model="gpt-4o-mini", documents=documents, embedding=embedding)

Vector store

Currently, CloneLLM supports the vector stores exposed by the RagVectorStore enum, including:

  • FAISS (installed via the clonellm[faiss] extra)
  • Chroma (installed via the clonellm[chroma] extra)

When an embedding model is specified (via the embedding parameter), dynamic context retrieval is enabled and the selected vector store is initialized and used to store the document embeddings.

# !pip install clonellm[faiss]
from clonellm import CloneLLM, LiteLLMEmbeddings, RagVectorStore
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

embedding = LiteLLMEmbeddings(model="text-embedding-3-small")
clone = CloneLLM(model="gpt-4o", documents=documents, embedding=embedding, vector_store=RagVectorStore.FAISS)

User profile

Create a personalized profile using CloneLLM's UserProfile, which allows you to feed detailed personal information into your clone for more customized interactions:

from clonellm import UserProfile

profile = UserProfile(
    first_name="Mehdi",
    last_name="Samsami",
    city="Shiraz",
    country="Iran",
    expertise=["Data Science", "AI/ML", "Data Analytics"],
)

Or simply define your profile as a Python dictionary:

profile = {
    "full_name": "Mehdi Samsami",
    "age": 28,
    "location": "Shiraz, Iran",
    "expertise": ["Data Science", "AI/ML", "Data Analytics"],
    "languages": ["English", "Persian"],
    "tone": "Friendly",
}

Finally, pass your profile to the clone via the user_profile parameter:

# !pip install clonellm[chroma]
from clonellm import CloneLLM, RagVectorStore
import os

os.environ["ANTHROPIC_API_KEY"] = "anthropic-api-key"

clone = CloneLLM(
    model="claude-3-opus-20240229",
    documents=documents,
    embedding=embedding,
    vector_store=RagVectorStore.Chroma,
    user_profile=profile,
)

Conversation history (memory)

Enable the memory feature to allow your clone to access the conversation history. Simply set the memory argument to True or -1 for unlimited memory, or to an integer greater than zero for a fixed memory size:

from clonellm import CloneLLM
import os

os.environ["HUGGINGFACE_API_KEY"] = "huggingface-api-key"

clone = CloneLLM(
    model="meta-llama/Llama-2-70b-chat",
    documents=documents,
    embedding=embedding,
    memory=10,  # Enable memory with maximum size of 10
)

Use the memory_size attribute to get the current length of the conversation history, i.e., the size of the clone's memory:

print(clone.memory_size)
# 6

If you need to clear the conversation history, i.e., the clone's memory, at any time, simply call either the reset_memory() or clear_memory() method.

clone.clear_memory()
# clone.reset_memory()

Streaming

CloneLLM supports streaming responses from the LLM, allowing for real-time processing of text as it is being generated, rather than receiving the whole output at once.

from clonellm import CloneLLM, LiteLLMEmbeddings
import os

os.environ["VERTEXAI_PROJECT"] = "hardy-device-28813"
os.environ["VERTEXAI_LOCATION"] = "us-central1"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your/credentials.json"

embedding = LiteLLMEmbeddings(model="textembedding-gecko@001")
clone = CloneLLM(model="gemini-1.0-pro", documents=documents, embedding=embedding)

for chunk in clone.stream("Describe yourself in 100 words"):
    print(chunk, end="", flush=True)

Async

CloneLLM provides asynchronous counterparts to its core methods, namely afit, ainvoke, and astream, for better performance in asynchronous programming contexts.

ainvoke

import asyncio
from clonellm import CloneLLM, LiteLLMEmbeddings
from langchain_core.documents import Document
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

async def main():
    documents = [...]
    embedding = LiteLLMEmbeddings(model="text-embedding-ada-002")
    clone = CloneLLM(model="gpt-4o", documents=documents, embedding=embedding)
    await clone.afit()
    response = await clone.ainvoke("Tell me about your skills?")
    return response

response = asyncio.run(main())
print(response)

astream

import asyncio
from clonellm import CloneLLM, LiteLLMEmbeddings
from langchain_core.documents import Document
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

async def main():
    documents = [...]
    embedding = LiteLLMEmbeddings(model="text-embedding-3-small")
    clone = CloneLLM(model="gpt-4o", documents=documents, embedding=embedding)
    await clone.afit()
    async for chunk in clone.astream("How comfortable are you with remote work?"):
        print(chunk, end="", flush=True)

asyncio.run(main())

Support Us

If you find CloneLLM useful, please consider showing your support in one of the following ways:

  • ⭐ Star our GitHub repository: This helps increase the visibility of our project.
  • 💡 Contribute: Submit pull requests to help improve the codebase, whether it's adding new features, fixing bugs, or improving documentation.
  • 📰 Share: Post about CloneLLM on LinkedIn or other social platforms.

Thank you for your interest in CloneLLM. We look forward to seeing you create your digital twin!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

clonellm-0.2.4.tar.gz (16.1 kB)

Uploaded Source

Built Distribution

clonellm-0.2.4-py3-none-any.whl (14.7 kB)

Uploaded Python 3

File details

Details for the file clonellm-0.2.4.tar.gz.

File metadata

  • Download URL: clonellm-0.2.4.tar.gz
  • Upload date:
  • Size: 16.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for clonellm-0.2.4.tar.gz
Algorithm Hash digest
SHA256 fe834b650f31f95d2b803bf8163d7387efa3a6387e95ec55806ea05e1f665344
MD5 0538f585de9d94fa4f4ea568d5d38454
BLAKE2b-256 81c3afa15e657178d7e140afa311881378e22df05904ab0a8510552ba5ca8415

See more details on using hashes here.

File details

Details for the file clonellm-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: clonellm-0.2.4-py3-none-any.whl
  • Upload date:
  • Size: 14.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for clonellm-0.2.4-py3-none-any.whl
Algorithm Hash digest
SHA256 be8711e53ae064361ca2edde1296f5a07073578b7e268be5fe8fc4d9c1f85d64
MD5 4ead0c75c1fcfba1ea4a2b7bab78e6eb
BLAKE2b-256 b1f29f3278019849af978b4fe3faa81adaaebf3281b8676d0658a37750204115

See more details on using hashes here.
