
AI Powered Ethical Hacking Assistant

Project description

Neutron

Welcome to Neutron.


Acknowledgement

First, I would like to thank the Almighty God, who is the source of all knowledge; without Him, this would not be possible.

Disclaimer: AI can make mistakes; consider cross-checking its suggestions.

🌐 Introducing Nebula Pro: A New Era in Ethical Hacking 🌐

🚀 We're thrilled to unveil a sneak peek of Nebula Pro, our latest innovation designed to empower ethical hackers with advanced, AI-driven capabilities. After months of dedicated development, we have launched the preview version. Some of the exciting features are:

  • AI Powered Autonomous Mode
  • AI Powered Suggestions
  • AI Powered Note Taking

Neutron is now part of Nebula Pro's free tier.

📺 Click Here to Get Access To Nebula Pro Now 🚀

Why Neutron?

The purpose of Neutron is straightforward: to provide security professionals with access to a free AI assistant that can be invoked directly from their command line interface. It was built as part of the free tier of Nebula Pro.

Click Here to Watch Neutron in Action

Compatibility

Neutron has been extensively tested and optimized for Linux platforms. As of now, its functionality on Windows or macOS is not guaranteed, and it may not operate as expected.

System dependencies

  • Storage: A minimum of 50GB of free disk space is recommended.

  • RAM: A minimum of 32GB of RAM is recommended.

  • Graphics Processing Unit (GPU) (NOT MANDATORY, Neutron can run on CPU): For GPU inference, at least 8GB of GPU memory is required and 12GB is recommended; 24GB or more gives optimal performance.

PYPI based distribution requirement(s)

  • Python3: Version 3.10 or later is required for compatibility with all used libraries.
  • PyTorch: A machine learning library for Python, used for computations and serving as a foundation for the Transformers library.
  • Transformers library by Hugging Face: Provides state-of-the-art machine learning techniques for natural language processing tasks. Required for the models and utilities used in NLP operations.
  • FastAPI: A modern, fast (high-performance) web framework for building APIs with Python 3.7+ based on standard Python type hints.
  • Uvicorn: An ASGI server for Python, needed to run FastAPI applications.
  • Pydantic: Data validation and settings management using Python type annotations, utilized within FastAPI applications.
  • Langchain Community and Core libraries: Utilized for functionality related to embeddings, vector stores, and more in the context of language processing.
  • Regular Expressions (re module in the Python Standard Library): Utilized for string operations.
  • Requests: An HTTP library for Python, used for client-server communication.
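As a quick sanity check before installing, the requirements above can be verified programmatically. The sketch below is not part of Neutron itself; it simply confirms the Python version and reports which of the listed packages are missing:

```python
import importlib.util
import sys

# Importable module names for the dependencies listed above.
REQUIRED_MODULES = [
    "fastapi", "uvicorn", "pydantic", "torch",
    "transformers", "langchain_community", "langchain_core", "requests",
]

def python_version_ok(version_info=sys.version_info):
    """Neutron requires Python 3.10 or later."""
    return version_info >= (3, 10)

def missing_modules(modules=REQUIRED_MODULES):
    """Return the subset of modules that cannot be found by the import system."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    print("Python version OK:", python_version_ok())
    print("Missing packages:", missing_modules())
```

Anything reported as missing can then be installed with the pip command below.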

To install the above dependencies:

pip install fastapi uvicorn pydantic torch transformers regex argparse typing-extensions langchain_community langchain_core

PIP:

pip install neutron-ai

Upgrading

For optimal performance and to ensure access to the most recent advancements, we consistently release updates and refinements to our models. Neutron will proactively inform you of any available updates to the package or the models upon each execution.

PIP:

pip install neutron-ai --upgrade

Usage

Server

usage: server.py [-h] [--host HOST] [--port PORT]

Run the FastAPI server.

options:
  -h, --help   show this help message and exit
  --host HOST  The hostname to listen on. Default is 0.0.0.0.
  --port PORT  The port of the webserver. Default is 8000.

The server can be invoked using the following command after installation using pip:

neutron-server --host 0.0.0.0 --port 8000
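The help text above shows option-style flags with defaults. As an illustration of that interface, here is a minimal argparse sketch reproducing the same options and defaults; it is not Neutron's actual source:

```python
import argparse

def build_parser():
    # Mirrors the options shown in the server help text above.
    parser = argparse.ArgumentParser(description="Run the FastAPI server.")
    parser.add_argument("--host", default="0.0.0.0",
                        help="The hostname to listen on. Default is 0.0.0.0.")
    parser.add_argument("--port", type=int, default=8000,
                        help="The port of the webserver. Default is 8000.")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args(["--port", "9000"])
    print(args.host, args.port)  # 0.0.0.0 9000
```

Omitting both flags falls back to the documented defaults, 0.0.0.0:8000.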

Client

usage: client.py [-h] [--server_url SERVER_URL] question

Send a question to the AI server.

positional arguments:
  question              The question to ask the AI server.

options:
  -h, --help            show this help message and exit
  --server_url SERVER_URL
                        The URL of the AI server, defaults to http://localhost:8000

The client can be invoked using the following command after installation using pip:

neutron-client "your question"
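For scripting, the same question can be sent over HTTP directly. The sketch below shows what such a request might look like; the endpoint path ("/ask") and payload key ("question") are assumptions for illustration, not Neutron's documented API:

```python
def build_question_payload(question, server_url="http://localhost:8000"):
    """Build the URL and JSON body for a question to the AI server.

    The "/ask" path and "question" key are illustrative assumptions;
    consult Neutron's client source for the real endpoint.
    """
    url = server_url.rstrip("/") + "/ask"
    payload = {"question": question}
    return url, payload

url, payload = build_question_payload("How do I scan a host with nmap?")
# The actual send would then be: requests.post(url, json=payload)
```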

To use Neutron AI directly from the command line with a shorter alias, for example AN, add the following function to your .bashrc or .zshrc:

function AN() {
    local query="$*"
    neutron-client "$query"
}

After adding the function, restart your terminal or run 'source ~/.bashrc' (or 'source ~/.zshrc') to apply the changes.

Then after starting the server, you can ask your questions like so:

AN your question

Authentication

To protect the server with a password, simply add it to your environment variables before you run both the server and the client, like so:

export NEUTRON_TOKEN="YOUR_SECRET"
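If you talk to the server from your own scripts rather than through neutron-client, the token would typically travel in a request header. The helper below is a sketch; the Bearer Authorization scheme is an assumption for illustration, not Neutron's documented behavior:

```python
import os

def auth_headers(env=os.environ):
    """Build HTTP headers from NEUTRON_TOKEN if it is set.

    The "Authorization: Bearer ..." scheme is an illustrative assumption;
    check Neutron's source for the exact header it expects.
    """
    token = env.get("NEUTRON_TOKEN")
    if token is None:
        return {}
    return {"Authorization": f"Bearer {token}"}
```

With the variable unset the helper returns no headers, matching an unauthenticated setup.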

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neutron-ai-1.0.0b6.tar.gz (13.7 kB view details)

Uploaded Source

Built Distribution

neutron_ai-1.0.0b6-py3-none-any.whl (12.6 kB view details)

Uploaded Python 3

File details

Details for the file neutron-ai-1.0.0b6.tar.gz.

File metadata

  • Download URL: neutron-ai-1.0.0b6.tar.gz
  • Upload date:
  • Size: 13.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.2

File hashes

Hashes for neutron-ai-1.0.0b6.tar.gz

  • SHA256: b33c5938b4ba806fed1bfe89220c66c8775d01e52854f844e228ef1f0b45b90d
  • MD5: f51c9c4799ee0967c2b481db90be0894
  • BLAKE2b-256: 67c225e7cf78f4e6133fa1b31752135c7d87db8d053d81719557144d657698c8

See more details on using hashes here.

File details

Details for the file neutron_ai-1.0.0b6-py3-none-any.whl.

File metadata

  • Download URL: neutron_ai-1.0.0b6-py3-none-any.whl
  • Upload date:
  • Size: 12.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.0.0 CPython/3.12.2

File hashes

Hashes for neutron_ai-1.0.0b6-py3-none-any.whl

  • SHA256: 1b6e7573886892596d1fd76f204566ee9dab03e7c03b34354de9c7ab4897647f
  • MD5: 288d81944b30d4f1b2cd69df4b641551
  • BLAKE2b-256: ef0502b3343ffc82fb72d3f8868fd10123fb6fb86b23a9bd20d368ca7afcb834

See more details on using hashes here.
