
Project description

Owlsight

Owlsight is a command-line tool that combines Python programming with open-source language models. It offers an interactive interface that allows you to execute Python code, shell commands, and use an AI assistant in one unified environment. This tool is ideal for those who want to integrate Python with generative AI capabilities.

Why owlsight?

Picture this: you dabble in Python occasionally, or you are a seasoned Pythonista. You frequently use generative AI to accelerate your workflow, especially for generating code. But this often means a tedious routine: copying and pasting code between ChatGPT and your IDE, switching contexts again and again.

What if you could eliminate this friction?

Owlsight brings Python development and generative AI together, streamlining your workflow by integrating them into a single, unified platform. No more toggling between windows, no more manual code transfers. With Owlsight, you get the full power of Python and AI in one place, simplifying your process and boosting productivity.

Generate code directly from model prompts and access that code from the Python interpreter, or augment model prompts with Python expressions. With this functionality, open-source models not only generate more accurate responses by executing Python code directly, but can also solve far more complex problems.

Features

  • Interactive CLI: Choose from multiple commands such as Python, shell, and AI model queries.
  • Python Integration: Switch to a Python interpreter and use Python expressions in language model queries.
  • Model Flexibility: Supports models in PyTorch, ONNX, and GGUF formats.
  • Customizable Configuration: Easily modify model and generation settings.
  • Retrieval Augmented Generation (RAG): Enrich prompts with documentation from Python libraries.

Installation

You can install Owlsight using pip:

pip install owlsight

By default, only the transformers library is installed for working with language models.

To add GGUF functionality:

pip install owlsight[gguf]

To add ONNX functionality:

pip install owlsight[onnx]

To install all packages:

pip install owlsight[all]

Usage

After installation, launch Owlsight in the terminal by running the following command:

owlsight

This will present you (together with some giant ASCII art of an owl) with the main menu:

Make a choice:
> how can I assist you?
shell
python
config: main
save
load
clear history
quit

Start by going to config > model and setting a model_id to load a model, either locally stored or from https://huggingface.co/

Available Commands

  • how can I assist you?: Ask a question or give an instruction.
  • shell: Execute shell commands.
  • python: Enter a Python interpreter.
  • config: main: Modify the main, model, generate, or rag configuration settings.
  • save/load: Save or load a configuration file.
  • clear history: Clear the chat history, Python interpreter history, and autocomplete history.
  • quit: Exit the application.

Example Workflow

You can combine Python variables with language models in Owlsight. For example:

python > a = 42
How can I assist you? > How much is {{a}} * 5?
answer -> 210
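Under the hood, this kind of prompt templating can be pictured as evaluating the expression inside {{...}} against the interpreter's namespace and splicing the result into the prompt. A minimal illustrative sketch (not Owlsight's actual implementation):

```python
import re

def render_prompt(prompt: str, namespace: dict) -> str:
    """Replace {{expression}} placeholders with values evaluated
    against a Python namespace. Illustrative sketch only."""
    def evaluate(match):
        return str(eval(match.group(1), namespace))
    return re.sub(r"\{\{(.+?)\}\}", evaluate, prompt)

print(render_prompt("How much is {{a}} * 5?", {"a": 42}))
# -> How much is 42 * 5?
```

The model then sees the already-substituted text, so it can reason over concrete values from your session.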

Additionally, you can ask a model to write Python code and then access that code in the Python interpreter.

From a model response, all generated Python code is extracted and can be edited or executed afterwards; this step is always optional. After execution, the defined objects are kept in the global namespace of the Python interpreter for the remainder of the current session. This is a powerful feature that enables a build-as-you-go approach for a wide range of tasks.

Example:

How can I assist you? > Can you write a function which reads an Excel file?

-> model writes a function called read_excel

python > excel_data = read_excel("path/to/excel")
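Conceptually, the extraction step can be pictured as pulling fenced Python blocks out of the model's markdown response and executing them in a shared namespace, so the defined objects persist for the rest of the session. A minimal sketch of the idea (the helper name and regex here are illustrative, not Owlsight's internals):

```python
import re

FENCE = "`" * 3  # a literal triple-backtick string

def extract_python_blocks(response: str):
    """Find every fenced python code block in a model response."""
    return re.findall(FENCE + r"python\n(.*?)" + FENCE, response, flags=re.DOTALL)

# A toy model response containing one code block:
response = "Sure:\n" + FENCE + "python\ndef double(x):\n    return x * 2\n" + FENCE

namespace = {}
for block in extract_python_blocks(response):
    exec(block, namespace)  # defined objects persist in the shared namespace

print(namespace["double"](21))  # -> 42
```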

Python interpreter

In addition to accessing objects created by model-generated code, the Python interpreter also offers some useful default functions, all starting with the "owl_" prefix.

These are:

  • owl_import(file_path: str) Import a Python file and load its contents into the current namespace.
  • owl_read(file_path: str) Read the content of a text file.
  • owl_scrape(url_or_terms: str, trim_newlines: int = 2, filter_by: Optional[dict], request_kwargs: dict) Scrape the text content of a webpage or search Bing and return the first result as a string.
    • url_or_terms: Webpage URL or search term.
    • trim_newlines: Max consecutive newlines (default 2).
    • filter_by: Dictionary specifying HTML tag and/or attributes to filter specific content.
    • **request_kwargs: Additional options for requests.get.
  • owl_show(docs: bool = False) Display all imported objects (optional: include docstrings).
  • owl_write(file_path: str, content: str) Write content to a text file.
  • owl_history(to_string: bool = False) Get the chat history with the current model.
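For intuition, owl_read and owl_write behave roughly like the following standalone sketches; the real functions are only available inside an Owlsight session:

```python
import os
import tempfile
from pathlib import Path

def owl_write(file_path: str, content: str) -> None:
    # sketch of owl_write: write text content to a file
    Path(file_path).write_text(content, encoding="utf-8")

def owl_read(file_path: str) -> str:
    # sketch of owl_read: return a file's text content
    return Path(file_path).read_text(encoding="utf-8")

path = os.path.join(tempfile.mkdtemp(), "notes.txt")
owl_write(path, "Generated by the model.")
print(owl_read(path))  # -> Generated by the model.
```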

Configurations

Owlsight uses a configuration file in JSON format to adjust various parameters. The configuration is divided into four main sections: main, model, generate, and rag. Here's an overview of the key configuration options:

Main Configuration

  • max_retries_on_error: The maximum number of retries to attempt when an error occurs during code execution (default: 3).
  • prompt_retry_on_error: Whether to prompt the user before executing code that comes from trying to fix an error (default: false).
  • prompt_code_execution: Whether to prompt the user before executing code (default: true).
  • extra_index_url: An additional URL to use for package installation, useful for custom package indexes.

Model Configuration

  • model_id: The ID of the model to use, either locally stored or from the Hugging Face model hub.
  • save_history: Whether to save the conversation history (default: false).
  • system_prompt: The prompt defining the model's behavior, role, and task.
  • transformers__device: The device to use for the transformers model.
  • transformers__quantization_bits: The number of bits for quantization of the transformers model.
  • gguf__filename: The filename of the GGUF model (required for GGUF models).
  • gguf__verbose: Whether to print verbose output for the GGUF model.
  • gguf__n_batch: The batch size for inference. Larger values can speed up inference but may require more memory.
  • gguf__n_cpu_threads: The number of CPU threads for inference. Increasing this can speed up inference if multiple CPU cores are available.
  • gguf__n_ctx: The total context length for the GGUF model.
  • onnx__tokenizer: The tokenizer to use for the ONNX model (required for ONNX models).
  • onnx__verbose: Whether to print verbose output for the ONNX model.

Generate Configuration

  • stopwords: A list of words where the model should stop generating text.
  • max_new_tokens: The maximum number of tokens to generate (default: 512).
  • temperature: The temperature for text generation. Higher values result in more random text (default: 0.0).
  • generation_kwargs: Additional keyword arguments for text generation.

RAG Configuration

  • active: Whether to add RAG search results to the model input (default: false). If true, the search_query results are added as context to the model prompt.
  • target_library: The Python library documentation to apply RAG to.
  • top_k: The number of search results to return.
  • search_query: The search query to use for RAG. When ENTER is pressed and active is true, the search results can be seen directly in the console.
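To make the top_k idea concrete, here is a toy stand-in for a RAG search that ranks documents by naive word overlap with the query; Owlsight's actual retrieval over library documentation is more sophisticated:

```python
def top_k_results(search_query: str, documents: list, top_k: int = 3) -> list:
    """Rank documents by shared-word overlap with the query and
    return the top_k best matches. Toy illustration only."""
    query_words = set(search_query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "pandas read_csv loads a CSV file into a DataFrame",
    "matplotlib draws plots",
    "read_csv accepts a sep parameter",
]
results = top_k_results("how to use read_csv", docs, top_k=2)
print(results)  # the two read_csv-related documents rank first
```

With active set to true, results like these would be prepended to the model prompt as context.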

Here's an example of what the default configuration looks like:

{
    "main": {
        "max_retries_on_error": 3,
        "prompt_retry_on_error": false,
        "prompt_code_execution": true,
        "extra_index_url": ""
    },
    "model": {
        "model_id": "",
        "save_history": false,
        "system_prompt": "# ROLE:\nYou are an advanced problem-solving AI with expert-level knowledge in various programming languages, particularly Python.\n\n# TASK:\n- Prioritize Python solutions when appropriate.\n- Present code in markdown format.\n- Clearly state when non-Python solutions are necessary.\n- Break down complex problems into manageable steps and think through the solution step-by-step.\n- Adhere to best coding practices, including error handling and consideration of edge cases.\n- Acknowledge any limitations in your solutions.\n- Always aim to provide the best solution to the user's problem, whether it involves Python or not.",
        "transformers__device": null,
        "transformers__quantization_bits": null,
        "gguf__filename": "",
        "gguf__verbose": false,
        "gguf__n_ctx": 512,
        "gguf__n_gpu_layers": 0,
        "gguf__n_batch": 512,
        "gguf__n_cpu_threads": 1,
        "onnx__tokenizer": "",
        "onnx__verbose": false,
        "onnx__num_threads": 1
    },
    "generate": {
        "stopwords": [],
        "max_new_tokens": 512,
        "temperature": 0.0,
        "generation_kwargs": {}
    },
    "rag": {
        "active": false,
        "target_library": "",
        "top_k": 3,
        "search_query": ""
    }
}

Configuration files can be saved and loaded through the main menu.

Changing configurations

To update a configuration, simply modify the desired value and press ENTER to confirm the change. Please note that only one configuration setting can be updated at a time, and the change will only take effect once ENTER has been pressed.

Temporary environment

During an Owlsight session, a temporary environment is created within the "site-packages" directory of the active (virtual) environment. Any packages installed during the session are removed when the session ends, ensuring your environment remains clean. If you want to persist installed packages, simply install them outside of Owlsight.

Error Handling and Auto-Fix

Owlsight automatically tries to fix and retry any code that encounters a ModuleNotFoundError by installing the required package and re-executing the code. It can also attempt to fix errors in its own generated code. This feature can be controlled by the max_retries_on_error parameter in the configuration file.
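The retry loop can be pictured roughly like the sketch below (illustrative only; run_with_retries is not part of Owlsight's API). The pip-install branch is shown, but the usage example only exercises the happy path:

```python
import subprocess
import sys

def run_with_retries(code: str, namespace: dict, max_retries_on_error: int = 3):
    """On ModuleNotFoundError, install the missing package and retry
    executing the code. Illustrative sketch of the auto-fix idea."""
    for attempt in range(max_retries_on_error + 1):
        try:
            exec(code, namespace)
            return
        except ModuleNotFoundError as exc:
            if attempt == max_retries_on_error:
                raise
            # install the package named after the missing module, then retry
            subprocess.check_call([sys.executable, "-m", "pip", "install", exc.name])

ns = {}
run_with_retries("x = 2 + 2", ns)
print(ns["x"])  # -> 4
```

In practice, the missing module name does not always match the pip package name, which is one reason a bounded retry count (max_retries_on_error) matters.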

API

Owlsight can also be used as a library in Python scripts. The main classes are the TextGenerationProcessor family, which can be imported from the owlsight package. Here's an example of how to use it:

from owlsight import TextGenerationProcessorGGUF
# If you want to use another type of model, you can import the other classes: TextGenerationProcessorONNX, TextGenerationProcessorTransformers

processor = TextGenerationProcessorGGUF(
    model_id=r"path\to\Phi-3-mini-128k-instruct.Q5_K_S.gguf",
)

question = "What is the meaning of life?"

for token in processor.generate_stream(question):
    print(token, end="", flush=True)

Release Notes

1.0.2

  • Enhanced cross-platform compatibility.
  • Introduced the generate_stream method to all TextGenerationProcessor classes.
  • Various minor bug fixes.

1.1.0

  • Added Retrieval Augmented Generation (RAG) for enriching prompts with documentation from Python libraries. This option is also added to the configuration.
  • History with autocompletion is now also available when writing prompts. Prompts can be autocompleted with TAB.

1.2.1

  • Access backend functionality through the API using "from owlsight import ..."
  • Added default functions to the Python interpreter, starting with the "owl_" suffix.
  • More configurations available when using GGUF models from the command line.

1.3.0

  • Added the owl_history function to the Python interpreter for directly accessing the model chat history.
  • Improved validation when loading a configuration file.
  • Added validation for retrying a code block after an error. This configuration is called prompt_retry_on_error.

1.4.0

  • Improved RAG possibilities in the API; added SentenceTransformerSearch, TFIDFSearch, and HashingVectorizerSearch classes.
  • Added search_documents to offer a general RAG solution for documents.
  • Added a caching option to all RAG solutions in the API, where documents, embeddings, etc. are pickled. This can save a significant amount of time when the number of documents is large.
