
Kani extension adding support for vision-language models (VLMs). Comes with model-agnostic support for GPT-4V and LLaVA.


kani-vision

Installation

To install kani-vision, you must have at least Python 3.10. kani-vision uses extras to provide support for specific models - see below for model-specific instructions and other extras.

You can combine multiple extras into a single command, like pip install "kani-vision[openai,ascii]".

OpenAI (GPT-4V)

$ pip install "kani-vision[openai]"

LLaVA v1.5

Note: to install the dependencies for LLaVA, you will need to run the following two commands, because the LLaVA package installs some outdated, incompatible dependencies by default:

$ pip install "kani-vision[llava]"
$ pip install --no-deps "llava @ git+https://github.com/haotian-liu/LLaVA.git@v1.1.3"

Other Extras

  • pip install "kani-vision[ascii]": When using chat_in_terminal_vision(), this will display any images you provide to the model as ASCII art in your terminal :).

Quickstart

from kani import Kani
from kani.ext.vision import chat_in_terminal_vision
from kani.ext.vision.engines.openai import OpenAIVisionEngine

# add your OpenAI API key here
api_key = "sk-..."
engine = OpenAIVisionEngine(api_key, model="gpt-4-vision-preview", max_tokens=512)
ai = Kani(engine)

# use `!path/to/file.png` to provide an image to the engine, e.g. `Please describe this image: !kani-logo.png`
# or use a URL: `Please describe this image: !https://example.com/image.png`
chat_in_terminal_vision(ai)

Usage

This section assumes that you're already familiar with the basic usage of kani. If not, go check out the kani docs first!

kani-vision provides two main features to extend kani with vision using the message parts API.

Engines

The first is the vision engines, which are the underlying vision-language models (VLMs). kani-vision comes with support for two VLM engines, GPT-4V (OpenAI's hosted model) and LLaVA v1.5 (an open-source extension of Vicuna):

| Model Name | Extra | Capabilities | Engine |
| ---------- | ----- | ------------ | ------ |
| GPT-4V | openai | 🛠 📡 | kani.ext.vision.engines.openai.OpenAIVisionEngine |
| LLaVA v1.5 | llava [^llava] | 🔓 🖥 🚀 | kani.ext.vision.engines.llava.LlavaEngine |

Legend

  • 🛠: Supports function calling.
  • 🔓: Open source model.
  • 🖥: Runs locally on CPU.
  • 🚀: Runs locally on GPU.
  • 📡: Hosted API.

[^llava]: See the installation instructions. You may also need to install PyTorch manually.

To initialize an engine, use it the same way you would in normal kani! All vision engines are interchangeable with normal kani engines.
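
For example, since GPT-4V supports function calling (🛠 above), a vision engine should slot straight into kani's usual subclassing and @ai_function workflow. The following is a minimal sketch assuming the standard kani function-calling API; the get_weather function and the image path are made up for illustration.

import asyncio
from typing import Annotated

from kani import AIParam, Kani, ai_function
from kani.ext.vision import ImagePart
from kani.ext.vision.engines.openai import OpenAIVisionEngine


class WeatherKani(Kani):
    # hypothetical function, included only to illustrate function calling with a vision engine
    @ai_function()
    def get_weather(
        self, location: Annotated[str, AIParam(desc="The city and state, e.g. San Francisco, CA")]
    ):
        """Get the current weather in a given location."""
        return f"Weather in {location}: Sunny, 72F."


async def main():
    engine = OpenAIVisionEngine(api_key="sk-...", model="gpt-4-vision-preview")
    ai = WeatherKani(engine)
    # mix text and an image in one query, then let the model call get_weather if it wants to
    async for msg in ai.full_round_str([
        "What's the weather like in the city shown in this photo?",
        ImagePart.from_path("path/to/city.png"),
    ]):
        print(msg)


asyncio.run(main())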

Message Part

The second feature you need to be familiar with is the ImagePart, the core way of sending images to the engine. To do this, when you call the kani round methods (i.e. Kani.chat_round or Kani.full_round, or their str variants), pass a list of parts rather than a single string:

from kani import Kani
from kani.ext.vision import ImagePart
from kani.ext.vision.engines.llava import LlavaEngine

engine = LlavaEngine("liuhaotian/llava-v1.5-7b")
ai = Kani(engine)

# notice how the arg is a list of parts rather than a single str!
# (this uses await, so run it inside an async function or an async REPL)
msg = await ai.chat_round_str([
    "Please describe this image:",
    ImagePart.from_path("path/to/image.png")
])
print(msg)

You can also construct images from a URL, raw PNG bytes, or a Pillow Image, using ImagePart.from_url, ImagePart.from_bytes, or ImagePart.from_image, respectively.
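
As a rough sketch (assuming each constructor takes the image source as its single required argument; exact keyword arguments may differ), the three alternatives look like this:

from PIL import Image
from kani.ext.vision import ImagePart

# from a URL (fetched or forwarded to the model, depending on the engine)
url_part = ImagePart.from_url("https://example.com/image.png")

# from raw PNG bytes already in memory
with open("path/to/image.png", "rb") as f:
    bytes_part = ImagePart.from_bytes(f.read())

# from an existing Pillow Image object
pil_part = ImagePart.from_image(Image.open("path/to/image.png"))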

See the examples for more.

Terminal Utility

Finally, kani-vision comes with an additional utility to chat with a VLM in your terminal, chat_in_terminal_vision.

This utility allows you to provide images from your disk or from the internet inline, by prefixing the path or URL with an exclamation point:

>>> from kani.ext.vision import chat_in_terminal_vision
>>> chat_in_terminal_vision(ai)
USER: Please describe this image: !path/to/image.png and also this one: !https://example.com/image.png
