
Gemini - PyTorch

Project description


The open source implementation of Gemini, the model that will "eclipse ChatGPT". It appears to take in all modalities directly, without a separate encoder of any kind, which means the encoding is built into the model itself.

input sequences {text, audio, images, video} -> [tokens] -> transformer -> conditional decoding for image generation
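The pipeline above can be sketched in a few lines. This is a minimal, purely illustrative sketch (the tokenizer functions and token IDs are made up, not the library's API): every modality is mapped into one shared token stream, and a single transformer consumes the concatenated sequence.

```python
# Hypothetical sketch of the encoder-free pipeline: all modalities become
# tokens in one stream before the transformer sees them.

def tokenize_text(text):
    # Stand-in text tokenizer: one made-up integer id per character.
    return [ord(c) % 1000 for c in text]

def tokenize_image(pixels, patch_size=4):
    # Stand-in image tokenizer: one made-up id per patch of pixels.
    num_patches = len(pixels) // patch_size
    return [1000 + i for i in range(num_patches)]

def build_sequence(text, pixels):
    # Interleave modalities into a single token stream for the transformer.
    return tokenize_text(text) + tokenize_image(pixels)

seq = build_sequence("hi", list(range(16)))
print(len(seq))  # 2 text tokens + 4 image-patch tokens = 6
```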

This architecture looks very similar to Fuyu's, just extended to many modalities: instead of a ViT encoder, you pass the image embeddings directly into the transformer.

The token inputs to Gemini will most likely be denoted by special modality tokens such as [IMG] or <img>, and [AUDIO] or <audio>.
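As a sketch of how such delimiter tokens might work (the token IDs and the `wrap` helper are hypothetical, not part of gemini-torch), each modality's tokens get wrapped in begin/end markers so the transformer can tell where a modality span starts and stops:

```python
# Hypothetical special-token ids for modality delimiters.
SPECIAL = {"<img>": 50000, "</img>": 50001, "<audio>": 50002, "</audio>": 50003}

def wrap(modality, tokens):
    # Surround a modality's token span with its begin/end delimiter tokens.
    return [SPECIAL[f"<{modality}>"]] + tokens + [SPECIAL[f"</{modality}>"]]

# Text tokens [11, 12], an image span [7, 8, 9], then more text [13].
seq = [11, 12] + wrap("img", [7, 8, 9]) + [13]
print(seq)  # [11, 12, 50000, 7, 8, 9, 50001, 13]
```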

CoDi also performs conditional generation by leveraging tokenized outputs.

To implement this, I plan to get the image embedding working well first, then move on to the audio embeddings, and then video.
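The image-embedding step boils down to Fuyu-style patch arithmetic: cut the image into patches, flatten each patch, and project it to the language model dimension, yielding a [B, SEQLEN, Dim] tensor. A small sketch of just the shape bookkeeping (all numbers illustrative, not the library's defaults):

```python
# Sketch of patch-embedding shape arithmetic: one token per image patch,
# each patch flattened then (conceptually) projected to the model dim.

def img_embed_shape(batch, height, width, channels, patch, dim):
    assert height % patch == 0 and width % patch == 0
    seqlen = (height // patch) * (width // patch)  # one token per patch
    patch_dim = channels * patch * patch           # flattened patch size
    # A learned [patch_dim, dim] projection would map each patch to dim.
    return (batch, seqlen, dim), patch_dim

shape, patch_dim = img_embed_shape(1, 256, 256, 3, 16, 2560)
print(shape, patch_dim)  # (1, 256, 2560) 768
```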

Install

pip3 install gemini-torch

Usage

Gemini Transformer Usage

  • No multi-modality yet
  • Just language
  • RoPE, xPos, ALiBi, etc.; multi-grouped queries; qk_norm
import torch
from gemini_torch import Gemini

# Initialize the model
model = Gemini(
    num_tokens=50432,
    max_seq_len=8192,
    dim=2560,
    depth=32,
    dim_head=128,
    heads=24,
    use_abs_pos_emb=False,
    alibi_pos_bias=True,
    alibi_num_heads=12,
    rotary_xpos=True,
    attn_flash=True,
    attn_kv_heads=2,
    qk_norm=True,
    attn_qk_norm=True,
    attn_qk_norm_dim_scale=True,
)

# Create a random batch of token IDs: batch size 1, sequence length 8192
x = torch.randint(0, 50432, (1, 8192))

# Apply the model to x
y = model(x)

# Print logits
print(y)

References

  • Combine reinforcement learning with a modular pretrained transformer and multi-modal capabilities (image, audio)
  • Self-improving mechanisms like RoboCat
  • PPO? or MPO
  • Get good at backtracking and exploring alternative paths
  • Speculative decoding
  • Algorithm of Thoughts
  • RLHF
  • Gemini Report
  • Gemini Landing Page

Todo

  • Implement the image feature embedder, align images with text, and pass them into the transformer
  • Implement the audio processing: an audio processor that takes in audio embeddings and reshapes them to match the language embeddings' shape [B, SEQLEN, Dim]
  • Do the same for video
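The audio item above is mostly shape bookkeeping as well: frame the waveform, embed each frame, and end up in the same [B, SEQLEN, Dim] layout the language embeddings use. A sketch of that arithmetic (frame sizes are illustrative, and the function is hypothetical, not the library's API):

```python
# Sketch of the audio-path shapes: one token per audio frame, each frame
# (conceptually) projected by a learned [frame_size, dim] matrix.

def audio_embed_shape(batch, num_samples, frame_size, dim):
    seqlen = num_samples // frame_size  # one token per audio frame
    return (batch, seqlen, dim)

# 1 second of 16 kHz audio, 400-sample frames, model dim 2560:
print(audio_embed_shape(1, 16000, 400, 2560))  # (1, 40, 2560)
```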

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gemini_torch-0.0.2.tar.gz (19.5 kB)

Uploaded Source

Built Distribution

gemini_torch-0.0.2-py3-none-any.whl (18.8 kB)

Uploaded Python 3

File details

Details for the file gemini_torch-0.0.2.tar.gz.

File metadata

  • Download URL: gemini_torch-0.0.2.tar.gz
  • Upload date:
  • Size: 19.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.3.2 CPython/3.11.0 Darwin/22.4.0

File hashes

Hashes for gemini_torch-0.0.2.tar.gz
Algorithm Hash digest
SHA256 b7ec721dbbf64553ed7fd0218582face09cf51138550a254b745a313f3e9f575
MD5 f0efb60b5ad433f719dcbfbf75274dcc
BLAKE2b-256 c4dc06a2fb4da058bda75541e893fe86fa9766848df519ac111ce88e51a5e41f

See more details on using hashes here.
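For example, a downloaded distribution can be checked against the published SHA256 above with nothing but the standard library (the file path is whatever you saved the sdist as):

```python
import hashlib

def sha256_of(path):
    # Stream the file in chunks so large archives don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Published hash for gemini_torch-0.0.2.tar.gz (from the table above):
expected = "b7ec721dbbf64553ed7fd0218582face09cf51138550a254b745a313f3e9f575"
# assert sha256_of("gemini_torch-0.0.2.tar.gz") == expected
```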

File details

Details for the file gemini_torch-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: gemini_torch-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 18.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.3.2 CPython/3.11.0 Darwin/22.4.0

File hashes

Hashes for gemini_torch-0.0.2-py3-none-any.whl
Algorithm Hash digest
SHA256 13fcb042c53ebcc548ef01073c3eba4ef9598c94e4c14f842555519709c86eac
MD5 c90f77f2b699052cb245afede3583f93
BLAKE2b-256 d8e1aaa54773ac83d8292f80d9992c10a8d747747185c83494d2fcfc118a944d

See more details on using hashes here.
