Using Local Packet Whisperer (LPW): Chat with PCAP/PCAPNG files locally, privately!
Local Packet Whisperer (LPW)
A fun project using Ollama, Streamlit & PyShark to chat with PCAP/PCAPNG files locally, privately!
Features & Background
- 100% local, private PCAP assistant powered by a range of local LLMs of your choice, via Ollama
- Purely based on prompt engineering, without any fancy libraries or dependencies. 100% vanilla
- Uses Streamlit for the frontend and PyShark for PCAP parsing
- Available as a pip installable package. So just pip it away! 😎
- As of v0.2.3, you can also connect LPW to an Ollama server running over a network.
Refer to the Release History for details on what each release contains.
Requirements
- Download & install Ollama by following the instructions for your OS here.
- Pull any chat-based LLM model to use with LPW:
ollama pull dolphin-mistral:latest
- If not running the desktop application, start the Ollama server (refer here).
- You also need the tshark executable. Either install the Wireshark application or simply run brew install tshark. ⚠️ Warning ⚠️ If you skip this step, you may see the error below:
TSharkNotFoundException: TShark not found. Try adding its location to the configuration file.
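Since PyShark shells out to tshark, you can verify the prerequisite from Python before launching LPW. A minimal sketch (not part of LPW itself):

```python
import shutil

def tshark_available() -> bool:
    """Return True if the tshark executable PyShark depends on is on PATH."""
    return shutil.which("tshark") is not None

if not tshark_available():
    print("tshark not found: install Wireshark or run `brew install tshark`")
```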
Usage
- Install LPW using pip
pip install lpw
- This will install the lpw CLI on your machine. Now simply start or stop LPW as follows:
lpw {start or stop}
lpw -h #for help
- LPW will automatically fetch the local models from the Ollama local repo and populate the dropdown. Select a model to start your test. You can try more than one model and compare the results 😎
- Now upload a PCAP/PCAPNG file.
- You can now start to chat with LPW and ask questions about the packets. Please note: the performance of LPW depends on the underlying model, so feel free to download as many local LLMs from Ollama as you like and try them. It is fun to see the different responses 🤩🤩🤩.
(This is a long gif. You will find LLM response at the end of the gif)
- By default, PyShark parses the PCAP only up to the transport layer. If you want, you can help the LLM parse the application layer by selecting a protocol filter in the analysis (just like you would in Wireshark).
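Under the hood this maps to PyShark's Wireshark-style display filters. A minimal sketch (the file name and the `http` filter below are illustrative, and pyshark/tshark must be installed for the call to work):

```python
try:
    import pyshark  # requires the tshark executable on PATH
except ImportError:  # keep the sketch importable without pyshark
    pyshark = None

def open_with_filter(path: str, display_filter: str = "http"):
    """Open a capture with a Wireshark-style display filter so the
    application layer (here HTTP) is decoded, not just transport."""
    if pyshark is None:
        raise RuntimeError("pyshark (and tshark) must be installed")
    return pyshark.FileCapture(path, display_filter=display_filter)
```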
Local Development
- Clone this repo and install the requirements:
git clone https://github.com/kspviswa/local-packet-whisperer.git
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
- Run the Streamlit app and point your browser to
http://localhost:8501
streamlit run bin/lpw_main.py
or simply
<lpw dir>/bin/lpw {start or stop}
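For reference, the model dropdown mentioned under Usage is filled from Ollama's local repository. A sketch of that lookup, assuming Ollama's documented GET /api/tags response shape (the sample payload below is illustrative, not LPW's actual code):

```python
import json

def model_names(tags_body: str) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

# Illustrative payload shaped like the Ollama REST API response.
sample = '{"models": [{"name": "dolphin-mistral:latest"}, {"name": "llama3:latest"}]}'
print(model_names(sample))  # → ['dolphin-mistral:latest', 'llama3:latest']
```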
Contributions
I created this project inspired by a similar project called Packet Buddy, which used OpenAI. If you find this useful and want to contribute bug fixes or additional features, feel free to do so by raising a PR, or open issues for me to fix. I intend to work on this as a hobby unless there is some interest in the community.