AI, Inside your Editor.


uniteai

UniteAI: Voice-to-text, Local LLM, and GPT, right in your editor.



Requirements: Python 3

Editor: VSCode(ium), Emacs, or any editor with LSP capabilities (most have them).

Screencast Demo

screencast.webm

The Vision

AIs, Why?

As we integrate more technology into our lives, it's becoming clear that our interactions with these systems will be more and more AI-mediated. This project envisions a future where:

  1. Creation: We co-create code, books, emails, work outputs, and more with AI.
  2. Management: AI aids in task and process handling.
  3. Learning: We learn and explore new concepts with AI.
  4. Entertainment: Our leisure times are enhanced through AI interaction.

This project hopes to build A Good Interface.

But why this project?

  • The Human-AI Team: feed off each others' strengths

    AI                       You
    Knows a ton              Knows what you want
    Thinks super fast        Doesn't hallucinate
    Easy to automate tasks
  • One-for-All AI Environment: get your AI Stack in one environment, get synergy among the tools.

  • Self-hosted AI Stack: more control, better security and customization.

  • High speed communication: Ultimate man-machine flow needs high-speed communication. Symbolic language is best served in a text-editor environment. Natural language integrates seamlessly via voice-to-text.

  • Conclusion: Let's get a local AI stack cozy inside a text editor.

Quickstart, installing Everything on Ubuntu

You can install more granularly than everything, but we'll demo everything first.

The only platform-dependent dependency right now is portaudio; the next section shows how to install it on Linux/Mac.

1.) Get: uniteai_lsp.

sudo apt install portaudio19-dev
pip install uniteai[all]
uniteai_lsp

It will ask whether it should create a default .uniteai.yml config for you. Update it with your preferences, including your OpenAI API key if you want that, and which local language model or transcription model you want.
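To give a rough sense of what the config holds, here is a hypothetical sketch of the structure a parsed .uniteai.yml might have. The key names below are illustrative guesses, not the real schema; run uniteai_lsp to generate the actual default config.

```python
# Hypothetical shape of a parsed .uniteai.yml (keys are illustrative
# guesses); uniteai_lsp generates the real default config for you.
config = {
    "openai": {"api_key": "sk-..."},            # only if you use ChatGPT
    "local_llm": {"model": "falcon-7b"},        # which local model to serve
    "transcription": {"model": "whisper-base"}, # which voice-to-text model
    "modules": ["openai", "local_llm", "transcription"],
}
print(sorted(config["modules"]))
```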

2.) Optional: Then start the long-lived LLM server, which offers your editor a connection to your local large language model.

uniteai_llm

3.) Install in your editor:

  • For VSCode, get the uniteai extension, e.g. press Ctrl-P, then run ext install uniteai.uniteai.

  • For VSCodium, VSCode Marketplace files are not compatible, so you'll need to either:

    • Download the prepackaged uniteai.vsix extension, then:

      codium --install-extension clients/vscode/uniteai.vsix
      
    • DIY:

      npm install -g @vscode/vsce
      git clone https://github.com/freckletonj/uniteai
      cd uniteai/clients/vscode
      vsce package
      codium --install-extension uniteai-<version>.vsix
      
  • For Emacs, copy the lsp-mode config to your init.el.

  • For other editors with LSP support (most have it), just translate the Emacs/VSCode configuration to your editor's format. Please submit a PR with new editor configs!

Granular installs

Still refer to the Quickstart section for the main workflow, such as calling uniteai_lsp to get your default config made.

Your config determines what modules/features are loaded.
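As a sketch of how config-driven module loading can work (the function and config keys here are hypothetical illustrations, not UniteAI's actual internals), a loader can simply skip any feature whose dependencies aren't installed:

```python
import importlib

def load_modules(config):
    """Import only the feature modules enabled in the config dict.

    The config shape here is a guess at what a parsed .uniteai.yml
    might look like; the real schema may differ.
    """
    loaded = {}
    for name in config.get("modules", []):
        try:
            loaded[name] = importlib.import_module(name)
        except ImportError:
            # Dependencies for this feature aren't installed; skip it.
            print(f"skipping {name}: dependencies not installed")
    return loaded

# Modules whose dependencies are missing are skipped, not fatal.
mods = load_modules({"modules": ["json", "nonexistent_feature"]})
```

This is why the granular installs below work: a feature you didn't install simply stays unloaded.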

The following installs the dependencies for each feature.

Transcription dependencies

# Debian/Ubuntu
sudo apt install portaudio19-dev  # needed by PyAudio

# Mac
brew install portaudio  # needed by PyAudio

pip install uniteai[transcription]

Local LLM dependencies

pip install uniteai[local_llm]

OpenAI/ChatGPT dependencies

pip install uniteai[openai]

Keycombos

Your client configuration determines these bindings; the following assumes the example client configs in ./clients:

VSCode       Emacs      Effect
M-'                     Show Code Actions Menu
Ctrl-Alt-g   C-c l g    Send region to GPT, stream output to text buffer
Ctrl-Alt-c   C-c l c    Same, but ChatGPT
Ctrl-Alt-l   C-c l l    Same, but local (e.g. Falcon) model
Ctrl-Alt-v   C-c l v    Start voice-to-text
Ctrl-Alt-s   C-c l s    Whatever's streaming, stop it

I'm still figuring out what's most ergonomic, so, I'm accepting feedback.

Contributions

Why?

Because there are so many cool tools yet to be added:

  • Image creation, eg: "Write a bulleted plan for a Hero's Journey story about X, and make an image for each scene."

  • Contextualize the AI by reading my emails via POP3, and possibly responding, eg: "what was that thing my accountant told me not to forget?"

  • Ask my database natural language questions, eg: "what were my top 10% customers' top 3 favorite products?"

  • Write-ahead for tab-completion, eg: "Once upon a ____".

  • Chat with a PDF document, eg: "what do the authors mean by X?"

  • Do some searches, scrape the web, and upload it all into my db.

  • Sky's the limit.

How?

A key goal of this project is to be Contributor-Friendly.

  • Make an Issue with your cool concept, or bug you found.

  • .todo/ is a directory of community "tickets", eg .todo/042_my_cool_feature.md. Make a ticket or take a ticket, and make a PR with your changes!

  • .todo/README.md gives an overview of the library, and advice on building against it.

  • A ./contrib directory is where you can add your custom feature; see ./uniteai/contrib/example.py.

  • .uniteai.yml configuration chooses which modules to load/not load.

  • The code is well-documented, robust, and simple, to reduce friction.

  • Adding a feature is as simple as writing some Python code and using uniteai's library, which directly handles issues like concurrency and communicating with/modifying the text editor.
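To illustrate the kind of concurrency the library handles for you, here is a toy model (not UniteAI's actual API; the class and method names are invented for illustration) of a streamed edit: a worker appends tokens to a buffer while a stop flag lets the user interrupt mid-stream, much like the stop keybinding above.

```python
import threading

class StreamingEdit:
    """Toy model of a streamed text edit: a worker appends tokens to a
    buffer until the stream ends or stop() is called. UniteAI's real
    API differs; this only illustrates the concurrency pattern."""

    def __init__(self):
        self.buffer = []
        self._stop = threading.Event()

    def stream(self, tokens):
        # Runs in a worker thread; checks the stop flag between tokens.
        for tok in tokens:
            if self._stop.is_set():
                break
            self.buffer.append(tok)

    def stop(self):
        self._stop.set()

edit = StreamingEdit()
t = threading.Thread(target=edit.stream, args=(["Once", " upon", " a", " time"],))
t.start()
t.join()
print("".join(edit.buffer))  # → "Once upon a time"
```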

Misc

Notes on Local LLMs

The file ./llm_server.py launches a TCP server that loads the LLM weights. The lsp_server makes calls to this llm_server.

The reason is that the lsp_server's lifecycle is (generally*) managed by the text editor, and LLM models can be really slow to boot. Especially when developing a feature, you do not want the model re-loaded into your GPU every time you restart the lsp_server.

* You don't have to let the editor manage the lsp_server. For instance, eglot in Emacs allows you to launch it yourself, and the editor client can just bind to the port.
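The wire format between lsp_server and llm_server isn't documented here, so as one plausible sketch (the framing and field names are assumptions, not the real protocol), a length-prefixed JSON message makes it easy to delimit requests over a TCP stream:

```python
import json
import struct

def encode_request(prompt, max_tokens=256):
    """Frame a request as a 4-byte big-endian length plus a JSON payload.
    The real uniteai_llm protocol may differ; this is illustrative."""
    payload = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()
    return struct.pack(">I", len(payload)) + payload

def decode_request(data):
    # Read the length prefix, then parse exactly that many payload bytes.
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length])

msg = encode_request("Write a haiku about editors")
assert decode_request(msg)["prompt"] == "Write a haiku about editors"
```

Whatever the real protocol, the point stands: the heavyweight model lives behind a socket, so restarting the lsp_server is cheap.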

Falcon LLM Issue:

If Falcon runs on multiple threads, its cache has an issue. You need a separate modelling_RW.py that makes sure it never tries to cache. https://github.com/h2oai/h2ogpt/pull/297

Replacing cos_sin with this seems to do the trick:

def cos_sin(
    self,
    seq_len: int,
    device="cuda",
    dtype=torch.bfloat16,
):  # returns a (cos, sin) pair of tensors
    # Recompute the rotary embeddings on every call instead of caching,
    # avoiding the cross-thread cache corruption described above.
    t = torch.arange(seq_len, device=device).type_as(self.inv_freq)
    freqs = torch.einsum("i,j->ij", t, self.inv_freq)
    emb = torch.cat((freqs, freqs), dim=-1).to(device)

    # Compute cos/sin in float32 for accuracy, then cast back down.
    if dtype in [torch.float16, torch.bfloat16]:
        emb = emb.float()

    cos_cached = emb.cos()[None, :, :]
    sin_cached = emb.sin()[None, :, :]

    cos_cached = cos_cached.type(dtype)
    sin_cached = sin_cached.type(dtype)

    return cos_cached, sin_cached

A separate bitsandbytes issue remains unresolved, but is less serious than the above. https://github.com/h2oai/h2ogpt/issues/104 https://github.com/TimDettmers/bitsandbytes/issues/162

License

Copyright (c) Josh Freckleton. All rights reserved.

Licensed under the Apache-2.0 license.
