AI Powered Ethical Hacking Assistant
Project description
Neutron
Welcome to Neutron.
Acknowledgement
First, I would like to thank Almighty God, who is the source of all knowledge; without Him, this would not be possible.
Disclaimer: AI can make mistakes; consider cross-checking its suggestions.
🌐 Introducing Nebula Pro: A New Era in Ethical Hacking 🌐
🚀 We're thrilled to unveil a sneak peek of Nebula Pro, our latest innovation designed to empower ethical hackers with advanced, AI-driven capabilities. After months of dedicated development, we have launched the preview version. Some of the exciting features are:
- AI Powered Autonomous Mode
- AI Powered Suggestions
- AI Powered Note Taking
Neutron is now part of Nebula Pro's free tier.
📺 Click Here to Get Access To Nebula Pro Now 🚀
Why Neutron?
The purpose of Neutron is straightforward: to provide security professionals with access to a free AI assistant that can be invoked directly from their command line interface. It was built as part of the free tier of Nebula Pro.
Click Here to Watch Neutron in Action
Compatibility
Neutron has been extensively tested and optimized for Linux platforms. As of now, its functionality on Windows or macOS is not guaranteed, and it may not operate as expected.
System dependencies
- Storage: A minimum of 50GB of free disk space is recommended.
- RAM: A minimum of 32GB of RAM is recommended.
- Graphics Processing Unit (GPU) (not mandatory, Neutron can run on CPU): if you do use a GPU, at least 8GB of GPU memory is required and 12GB is recommended; 24GB or more is recommended for optimal performance.
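If you plan to use an NVIDIA GPU, you can check how much GPU memory is available before installing anything, for example with:
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv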
PyPI-based distribution requirements
- Python3: Version 3.10 or later is required for compatibility with all used libraries.
- PyTorch: A machine learning library for Python, used for computations and serving as a foundation for the Transformers library.
- Transformers library by Hugging Face: Provides state-of-the-art machine learning techniques for natural language processing tasks. Required for the models and utilities used in NLP operations.
- FastAPI: A modern, fast (high-performance) web framework for building APIs with Python 3.7+ based on standard Python type hints.
- Uvicorn: An ASGI server for Python, needed to run FastAPI applications.
- Pydantic: Data validation and settings management using Python type annotations, utilized within FastAPI applications.
- LangChain Community and Core libraries: Used for functionality related to embeddings, vector stores, and other language-processing components.
- Regular expressions (the re module in the Python standard library): Used for string operations.
- Requests: Used for making HTTP requests.
To install the above dependencies:
pip install fastapi uvicorn pydantic torch transformers regex argparse typing-extensions langchain_community langchain_core
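Since a GPU is optional (Neutron can also run on CPU), you may want to confirm whether PyTorch can actually see one after installing the dependencies, for example:
python3 -c "import torch; print(torch.cuda.is_available())"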
PIP:
pip install neutron-ai
Upgrading
To ensure optimal performance and access to the most recent advancements, we regularly release updates and refinements to our models. Neutron will notify you of any available updates to the package or the models each time it runs.
PIP:
pip install neutron-ai --upgrade
Usage
Server
usage: server.py [-h] [--host HOST] [--port PORT]
Run the FastAPI server.
options:
-h, --help show this help message and exit
--host HOST The hostname to listen on. Default is 0.0.0.0.
--port PORT The port of the webserver. Default is 8000.
After installation via pip, the server can be started with the following command:
neutron-server --host 0.0.0.0 --port 8000
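FastAPI applications expose interactive API documentation at /docs by default; assuming Neutron's server keeps that default, you can quickly verify that it is up with, for example:
curl http://localhost:8000/docs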
Client
usage: client.py [-h] [--server_url SERVER_URL] question
Send a question to the AI server.
positional arguments:
question The question to ask the AI server.
options:
-h, --help show this help message and exit
--server_url SERVER_URL
The URL of the AI server, defaults to http://localhost:8000
After installation via pip, the client can be invoked with the following command:
neutron-client "your question"
To invoke Neutron directly from the command line using a shorter alias, for example AN, add the following function to your .bashrc or .zshrc:
function AN() {
local query="$*"
neutron-client "$query"
}
After adding, restart your terminal or run 'source ~/.bashrc' (or 'source ~/.zshrc') to apply the changes.
Then after starting the server, you can ask your questions like so:
AN your question
Authentication
To protect the server and client with a shared secret, set the NEUTRON_TOKEN environment variable before you run both the server and the client, like so:
export NEUTRON_TOKEN="YOUR_SECRET"
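A minimal sketch of running both sides with the same token (the secret value and the question are placeholders):
# On the server
export NEUTRON_TOKEN="YOUR_SECRET"
neutron-server --host 0.0.0.0 --port 8000
# On the client (same secret)
export NEUTRON_TOKEN="YOUR_SECRET"
neutron-client "your question"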
File details
Details for the file neutron_ai-1.0.0b7.tar.gz.
File metadata
- Download URL: neutron_ai-1.0.0b7.tar.gz
- Upload date:
- Size: 13.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7dfe59533cd3c8be96aa19c37cc27e64d8242a9d1826808a00ec62f861b50837
MD5 | dbfb27f136dd822572e5a03b20698b48
BLAKE2b-256 | e4ca62a7249582a44198839b7ed90855bef66e8bcd9770a65076f86a6f130a46
File details
Details for the file neutron_ai-1.0.0b7-py3-none-any.whl.
File metadata
- Download URL: neutron_ai-1.0.0b7-py3-none-any.whl
- Upload date:
- Size: 12.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8d45a97aa2c5ff610538fe2dee842e18eb06633575131cd84037ee8d432d12dc
MD5 | b7b767f228a10d13344730886e9c92d4
BLAKE2b-256 | 1c0de1f3731e140cea2b7367c03fefa08be130d7c2774a0ae5ce5445b051196b