The future of AI.
pip install cosmoai
cosmo
CosmoAI lets LLMs run code (Python, JavaScript, Shell, and more) locally. After installing, you can chat with CosmoAI through a ChatGPT-like interface in your terminal by running $ cosmo.
This provides a natural-language interface to your computer's general-purpose capabilities:
- Create and edit photos, videos, PDFs, etc.
- Control a Chrome browser to perform research
- Plot, clean, and analyze large datasets
- ...etc.
⚠️ Note: You'll be asked to approve code before it's run.
Demo
An interactive demo is also available on Google Colab:
Along with an example implementation of a voice interface (inspired by Her):
Quick Start
pip install cosmoai
Terminal
After installation, simply run cosmo:
cosmo
Python
import cosmo
cosmo.chat("Plot AAPL and META's normalized stock prices") # Executes a single command
cosmo.chat() # Starts an interactive chat
Comparison to ChatGPT's Code Interpreter
OpenAI's release of Code Interpreter with GPT-4 presents a fantastic opportunity to accomplish real-world tasks with ChatGPT.
However, OpenAI's service is hosted, closed-source, and heavily restricted:
- No internet access.
- Limited set of pre-installed packages.
- 100 MB maximum upload, 120-second runtime limit.
- State is cleared (along with any generated files or links) when the environment dies.
CosmoAI overcomes these limitations by running in your local environment. It has full access to the internet, isn't restricted by time or file size, and can utilize any package or library.
This combines the power of GPT-4's Code Interpreter with the flexibility of your local development environment.
Commands
Update: The Generator Update (0.1.5) introduced streaming:
message = "What operating system are we on?"
for chunk in cosmo.chat(message, display=False, stream=True):
    print(chunk)
Interactive Chat
To start an interactive chat in your terminal, either run cosmo from the command line:
cosmo
Or cosmo.chat() from a .py file:
cosmo.chat()
You can also stream each chunk:
message = "What operating system are we on?"
for chunk in cosmo.chat(message, display=False, stream=True):
    print(chunk)
Programmatic Chat
For more precise control, you can pass messages directly to .chat(message):
cosmo.chat("Add subtitles to all videos in /videos.")
# ... Streams output to your terminal, completes task ...
cosmo.chat("These look great but can you make the subtitles bigger?")
# ...
Start a New Chat
In Python, CosmoAI remembers conversation history. If you want to start fresh, you can reset it:
cosmo.reset()
Save and Restore Chats
cosmo.chat() returns a List of messages, which can be used to resume a conversation with cosmo.messages = messages:
messages = cosmo.chat("My name is Henry.") # Save messages to 'messages'
cosmo.reset() # Reset cosmo ("Henry" will be forgotten)
cosmo.messages = messages # Resume chat from 'messages' ("Henry" will be remembered)
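Because cosmo.messages is ordinary Python data, one way to persist a conversation between runs is to write it to a JSON file. The sketch below assumes the messages are JSON-serializable dictionaries; the messages.json filename is just an example:
import json
import cosmo

messages = cosmo.chat("My name is Henry.")

# Save the conversation to a file (example path)
with open("messages.json", "w") as f:
    json.dump(messages, f)

# Later, in a fresh session, load the file and resume the chat
with open("messages.json") as f:
    cosmo.messages = json.load(f)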
Customize System Message
You can inspect and configure CosmoAI's system message to extend its functionality, modify permissions, or give it more context.
cosmo.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(cosmo.system_message)
Change your Language Model
CosmoAI uses LiteLLM to connect to hosted language models.
You can change the model by setting the model parameter:
cosmo --model gpt-3.5-turbo
cosmo --model claude-2
cosmo --model command-nightly
In Python, set the model on the object:
cosmo.model = "gpt-3.5-turbo"
Find the appropriate "model" string for your language model in LiteLLM's documentation.
Running CosmoAI locally
CosmoAI uses LM Studio to connect to local language models (experimental).
Simply run cosmo in local mode from the command line:
cosmo --local
You will need to run LM Studio in the background.
- Download LM Studio from https://lmstudio.ai/, then start it.
- Select a model then click ↓ Download.
- Click the ↔️ button on the left (below 💬).
- Select your model at the top, then click Start Server.
Once the server is running, you can begin your conversation with CosmoAI.
(When you run the command cosmo --local, the steps above will be displayed.)
Note: Local mode sets your context_window to 3000 and your max_tokens to 600. If your model has different requirements, set these parameters manually (see below).
Context Window, Max Tokens
You can modify the max_tokens and context_window (in tokens) of locally running models.
For local mode, smaller context windows will use less RAM, so we recommend trying a much shorter window (~1000) if it is failing or if it's slow. Make sure max_tokens is less than context_window.
cosmo --local --max_tokens 1000 --context_window 3000
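If you are driving CosmoAI from Python rather than the CLI, a plausible equivalent is to set the same values as attributes on the module, mirroring the --context_window and --max_tokens flags. The attribute names below are assumed from the flag names rather than confirmed API:
import cosmo

# Assumed attribute names mirroring the CLI flags (not confirmed API)
cosmo.context_window = 3000
cosmo.max_tokens = 1000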
Debug mode
To help contributors inspect CosmoAI, --debug mode is highly verbose.
You can activate debug mode by using its flag (cosmo --debug), or mid-chat:
$ cosmo
...
> %debug true <- Turns on debug mode
> %debug false <- Turns off debug mode
Interactive Mode Commands
In interactive mode, you can use the following commands to enhance your experience:
- %debug [true/false]: Toggle debug mode. Without arguments or with true, it enters debug mode. With false, it exits debug mode.
- %reset: Resets the current session's conversation.
- %undo: Removes the previous user message and the AI's response from the message history.
- %save_message [path]: Saves messages to a specified JSON path. If no path is provided, it defaults to messages.json.
- %load_message [path]: Loads messages from a specified JSON path. If no path is provided, it defaults to messages.json.
- %tokens [prompt]: (Experimental) Calculates the tokens that will be sent with the next prompt as context and estimates their cost. Optionally calculates the tokens and estimated cost of a prompt if one is provided. Relies on LiteLLM's cost_per_token() method for estimated costs.
- %help: Shows the help message.
Configuration
CosmoAI allows you to set default behaviors using a config.yaml file.
This provides a flexible way to configure CosmoAI without changing command-line arguments every time.
Run the following command to open the configuration file:
cosmo --config
Multiple Configuration Files
CosmoAI supports multiple config.yaml files, allowing you to easily switch between configurations via the --config_file argument.
Note: --config_file accepts either a file name or a file path. File names will use the default configuration directory, while file paths will use the specified path.
To create or edit a new configuration, run:
cosmo --config --config_file $config_path
To have CosmoAI load a specific configuration file, run:
cosmo --config_file $config_path
Note: Replace $config_path with the name of or path to your configuration file.
CLI Example
- Create a new config.turbo.yaml file:
  cosmo --config --config_file config.turbo.yaml
- Edit the config.turbo.yaml file to set model to gpt-3.5-turbo
- Run CosmoAI with the config.turbo.yaml configuration:
  cosmo --config_file config.turbo.yaml
Python Example
You can also load configuration files when calling CosmoAI from Python scripts:
import os
import cosmo
currentPath = os.path.dirname(os.path.abspath(__file__))
config_path=os.path.join(currentPath, './config.test.yaml')
cosmo.extend_config(config_path=config_path)
message = "What operating system are we on?"
for chunk in cosmo.chat(message, display=False, stream=True):
    print(chunk)
Sample FastAPI Server
The generator update enables CosmoAI to be controlled via HTTP REST endpoints:
# server.py
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
import cosmo

app = FastAPI()

@app.get("/chat")
def chat_endpoint(message: str):
    def event_stream():
        for result in cosmo.chat(message, stream=True):
            yield f"data: {result}\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")

@app.get("/history")
def history_endpoint():
    return cosmo.messages
pip install fastapi uvicorn
uvicorn server:app --reload
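To try the endpoint from another process, one option is to stream the server-sent events with the requests library. This is an illustrative sketch that assumes the server above is running locally on port 8000:
import requests

# Stream server-sent events from the /chat endpoint (assumes server.py is running on localhost:8000)
with requests.get(
    "http://localhost:8000/chat",
    params={"message": "What operating system are we on?"},
    stream=True,
) as response:
    for line in response.iter_lines(decode_unicode=True):
        if line:  # skip the blank separator lines between SSE events
            print(line)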
Safety Notice
Since generated code is executed in your local environment, it can interact with your files and system settings, potentially leading to unexpected outcomes like data loss or security risks.
⚠️ CosmoAI will ask for user confirmation before executing code.
You can run cosmo -y or set cosmo.auto_run = True to bypass this confirmation, in which case:
- Be cautious when requesting commands that modify files or system settings.
- Watch CosmoAI like a self-driving car, and be prepared to end the process by closing your terminal.
- Consider running CosmoAI in a restricted environment like Google Colab or Replit. These environments are more isolated, reducing the risks of executing arbitrary code.
There is experimental support for a safe mode to help mitigate some risks.
How Does it Work?
CosmoAI equips a function-calling language model with an exec() function, which accepts a language (like "Python" or "JavaScript") and code to run.
We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
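As a rough illustration of the idea, a definition for such an exec function might look like the dictionary below. This is a hypothetical sketch in the style of OpenAI's function-calling schema, not CosmoAI's actual internal definition:
# Hypothetical function-calling schema for the exec() function described above (illustrative only)
exec_function_schema = {
    "name": "exec",
    "description": "Run code on the user's machine and return its output.",
    "parameters": {
        "type": "object",
        "properties": {
            "language": {
                "type": "string",
                "description": "The language to run, e.g. 'python' or 'javascript'.",
            },
            "code": {
                "type": "string",
                "description": "The source code to execute.",
            },
        },
        "required": ["language", "code"],
    },
}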
Contributing
Thank you for your interest in contributing! We welcome involvement from the community.
Please see our Contributing Guidelines for more details on how to get involved.
License
CosmoAI and associated documentation files (the "Software"), are proprietary to [HENRY BLACK], and are confidential and protected under applicable copyright and other laws.
Unauthorized use, reproduction or distribution of the Software or any portion of it, may result in severe civil and criminal penalties, and will be prosecuted to the maximum extent possible under the law.
The Software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement.
In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
Note: This software is not affiliated with OpenAI.
Having access to a junior programmer working at the speed of your fingertips ... can make new workflows effortless and efficient, as well as open the benefits of programming to new audiences.
— OpenAI's Code Interpreter Release