📜 conexia
A Python library for fetching and caching a device's real public IP address and port using STUN (Session Traversal Utilities for NAT) servers. It integrates easily with Python backend frameworks and supports Redis, SQLite, file-based, and in-memory caching for fast lookups.
📌 Why Use This?
- Identifies the device's real public IP address, even behind NAT.
- Provides multiple cache backends (Redis, SQLite, file, memory).
- Works in Django, Flask, or standalone Python scripts.
- Caches results automatically with minimal configuration.
📦 Installation
```shell
pip install conexia
```
or install from source:
```shell
git clone https://github.com/paulsonlegacy/conexia.git
cd conexia
pip install .
```
⚡ Usage
Basic Example
```python
import asyncio
from conexia.core import STUNClient

async def main():
    client = STUNClient(cache_backend="file")  # Options: "memory", "file", "sqlite", "redis"
    user_id = await client.get_user_id()
    public_ip = await client.get_public_ip()
    public_port = await client.get_public_port()
    nat_type = await client.get_nat_type()
    print("User ID:", user_id)
    print("Public IP:", public_ip)
    print("Public Port:", public_port)
    print("NAT Type:", nat_type)

# Ensure the script runs asynchronously
if __name__ == "__main__":
    asyncio.run(main())
```
📌 Output (Example)
```json
{
    "user_id": "device123",
    "data": {
        "ip": "203.0.113.10",
        "port": 3478,
        "nat_type": "Full Cone"
    },
    "timestamp": 1691234567
}
```
NB - The user ID is optional; one is generated automatically if not provided.
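Cached entries like the one above carry a `timestamp`, so checking freshness is just a comparison against a TTL. A minimal sketch of that idea (the helper `is_fresh` is hypothetical, not part of conexia's API):

```python
import time

def is_fresh(entry: dict, ttl: int = 300) -> bool:
    """Return True if a cached STUN entry is younger than `ttl` seconds."""
    return (time.time() - entry["timestamp"]) < ttl

# An entry cached 60 seconds ago:
entry = {"user_id": "device123", "data": {"ip": "203.0.113.10"}, "timestamp": time.time() - 60}
print(is_fresh(entry, ttl=300))  # True: 60s old, within a 300s TTL
print(is_fresh(entry, ttl=30))   # False: 60s old exceeds a 30s TTL
```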
🔌 Integrating with Django
Since Django runs on WSGI by default, you need to enable ASGI for async support in Django.
Using ASGI in Django
1️⃣ Ensure you have Django 3.2+ installed.
2️⃣ Create/modify asgi.py in Your Project Root - This file makes Django work asynchronously.
```python
# your_project/asgi.py
import os
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "your_project.settings")  # Change "your_project"

application = get_asgi_application()
```
NB - The asgi.py file should be in the same folder as settings.py, which is inside your Django project directory (not the root folder with manage.py).
```text
your_project/              # Django project folder
│── manage.py
│── your_project/          # Actual Django project package
│   │── __init__.py
│   │── settings.py
│   │── urls.py
│   │── asgi.py            # ✅ Place asgi.py here!
│   │── wsgi.py
│   └── ...
│── app1/
│── app2/
│── ...
```
3️⃣ Install an ASGI server like daphne or uvicorn:
```shell
pip install daphne
```
or
```shell
pip install uvicorn
```
4️⃣ Run Django ASGI server:
For daphne:
```shell
daphne -b 0.0.0.0 -p 8000 your_project.asgi:application
```
For uvicorn:
```shell
uvicorn your_project.asgi:application --host 0.0.0.0 --port 8000
```
5️⃣ Install the package
```shell
pip install conexia
```
6️⃣ Enable the STUN middleware in settings.py. Modify settings.py to activate the middleware and configure caching options:
```python
# settings.py
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
    # ✅ Add Conexia middleware
    "conexia.middleware.django.STUNMiddleware",
]

# STUN configuration
STUN_CACHE_BACKEND = "sqlite"  # Options: "memory", "file", "sqlite", "redis"
STUN_CACHE_TTL = 300           # Cache expiry in seconds
```
7️⃣ Access STUN data inside Django views. Once the middleware is enabled, every request object will have the following attributes:
```python
from django.http import JsonResponse

def sample_view(request):
    return JsonResponse({
        "original_ip": request.original_ip,
        "original_port": request.original_port,
        "nat_type": request.nat_type
    })
```
🌐 Integrating with Flask
Flask does not natively support ASGI, but you can enable async support using hypercorn or uvicorn.
Async Support in Flask
1️⃣ Install an ASGI server:
```shell
pip install hypercorn
```
or
```shell
pip install uvicorn
```
2️⃣ Create app.py
```python
# app.py
import asyncio

from flask import Flask, jsonify
from conexia.core import STUNClient

app = Flask(__name__)
stun_client = STUNClient(cache_backend="redis", ttl=300)

@app.route("/get_ip/<user_id>")
async def get_ip(user_id):
    stun_info = await asyncio.to_thread(stun_client.get_stun_info, user_id)
    return jsonify(stun_info)

if __name__ == "__main__":
    try:
        # hypercorn.asyncio.serve() is a coroutine, so run it with asyncio.run()
        import hypercorn.asyncio
        from hypercorn.config import Config

        config = Config()
        config.bind = ["0.0.0.0:8000"]
        asyncio.run(hypercorn.asyncio.serve(app, config))
    except ImportError:
        app.run(debug=True)  # Fall back to sync mode if hypercorn is not installed
```
3️⃣ Choose how to run the server
Synchronous mode (default Flask WSGI, serves on port 5000):
```shell
python app.py
```
Asynchronous mode (ASGI using hypercorn, serves on port 8000):
```shell
hypercorn app:app --bind 0.0.0.0:8000
```
Alternative ASGI server (uvicorn):
```shell
uvicorn app:app --host 0.0.0.0 --port 8000
```
4️⃣ Test the API in a browser or Postman (use port 8000 when running under an ASGI server):
```text
http://127.0.0.1:5000/get_ip/device123
```
✅ Alternative Approach Using Flask Hooks
If you want to simulate middleware behavior in Flask, you can use Flask's before_request hook like this:
```python
import asyncio

from flask import Flask, g, jsonify, request
from conexia.core import STUNClient

app = Flask(__name__)
stun_client = STUNClient(cache_backend="redis", ttl=300)

@app.before_request
async def attach_stun_data():
    user_id = request.args.get("user_id", "default_id")
    stun_info = await asyncio.to_thread(stun_client.get_stun_info, user_id)
    g.stun_info = stun_info  # Attach to the request-scoped context

@app.route("/get_ip")
async def get_ip():
    return jsonify(g.stun_info)

if __name__ == "__main__":
    import hypercorn.asyncio
    from hypercorn.config import Config

    config = Config()
    config.bind = ["0.0.0.0:8000"]
    asyncio.run(hypercorn.asyncio.serve(app, config))
```
💾 Available Cache Backends
| Cache Backend | Description |
|---|---|
| `memory` | In-memory cache (fast but not persistent). |
| `file` | Saves cached data in `cache.json` (persistent across restarts). |
| `sqlite` | Uses an SQLite database for efficient storage. |
| `redis` | Uses Redis for distributed caching. |

NB - Default is `file`
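To illustrate what a file backend stores, here is a minimal JSON-file cache with per-entry TTL expiry. This is a hypothetical sketch of the general technique, not conexia's actual implementation:

```python
import json
import os
import tempfile
import time

class FileCacheSketch:
    """Minimal JSON-file cache with per-entry TTL (illustrative only)."""

    def __init__(self, path="cache.json", ttl=300):
        self.path = path
        self.ttl = ttl

    def _load(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def set(self, user_id, data):
        cache = self._load()
        cache[user_id] = {"data": data, "timestamp": time.time()}
        with open(self.path, "w") as f:
            json.dump(cache, f)

    def get(self, user_id):
        entry = self._load().get(user_id)
        if entry and time.time() - entry["timestamp"] < self.ttl:
            return entry["data"]
        return None  # missing or expired

# Demo: entries persist in the file across instances and expire after the TTL
cache = FileCacheSketch(path=os.path.join(tempfile.gettempdir(), "demo_cache.json"), ttl=300)
cache.set("device123", {"ip": "203.0.113.10", "port": 3478})
print(cache.get("device123"))  # {'ip': '203.0.113.10', 'port': 3478}
print(cache.get("unknown"))    # None
```

Because every entry is timestamped, expiry needs no background sweeper: stale entries are simply ignored on read.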
🔧 Clearing Cache
Clear cache for a specific user ID:
```python
stun_client.clear_cache(user_id="device123")
```
Clear all cached data:
```python
stun_client.clear_cache()
```
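On a dict-backed memory cache, those two calls map naturally onto deleting one key versus all keys. A sketch with a hypothetical class (not conexia's internals):

```python
class MemoryCacheSketch:
    """Dict-backed cache illustrating clear_cache semantics (illustrative only)."""

    def __init__(self):
        self._store = {}

    def set(self, user_id, data):
        self._store[user_id] = data

    def clear_cache(self, user_id=None):
        if user_id is None:
            self._store.clear()             # clear all cached data
        else:
            self._store.pop(user_id, None)  # clear one user's entry, if present

mem = MemoryCacheSketch()
mem.set("device123", {"ip": "203.0.113.10"})
mem.set("device456", {"ip": "203.0.113.11"})
mem.clear_cache(user_id="device123")
print(sorted(mem._store))  # ['device456']
mem.clear_cache()
print(mem._store)          # {}
```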
📜 License
This project is licensed under the MIT License.
👨💻 Contributing
1️⃣ Fork the repository
2️⃣ Clone your fork
```shell
git clone https://github.com/paulsonlegacy/conexia.git
cd conexia
```
3️⃣ Create a feature branch
```shell
git checkout -b feature-name
```
4️⃣ Submit a pull request! 🚀
🙌 Acknowledgments
🎉 This library is dedicated to my mom - Monica A. Bosah, whose support made this possible. ❤️
🚀 Next Steps
- Optional caching for simple tasks
- Support for both synchronous and asynchronous usage
- Additional network parameters in the fetched STUN info
- Standalone and environment-simulated tests for the middleware
- Support for other Python backend frameworks
- Signalling feature
💡 Want More Features?
If you have feature suggestions or bugs, open an issue on GitHub! 🚀
File details
Details for the file conexia-0.1.0.tar.gz.

File metadata
- Download URL: conexia-0.1.0.tar.gz
- Upload date:
- Size: 12.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.0

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | de8d88ce4f10e3bde938435102d0f979e3ca28abf7dd3e87f2631ba8a4a3fdde |
| MD5 | dde2a20f14393dad686ec06af40f2ed9 |
| BLAKE2b-256 | 56c2b95b268a0fd7f5a7662a1f5653ee1f3ab820da056110e80ae30d5cbed637 |
File details
Details for the file conexia-0.1.0-py3-none-any.whl.

File metadata
- Download URL: conexia-0.1.0-py3-none-any.whl
- Upload date:
- Size: 12.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.10.0

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 49292a6bb5a11643c5de2a9dcbe0c91981a309b2cdb263c5dab251863f701bd9 |
| MD5 | 4e8812b883e6bfbb35bb3a1768146066 |
| BLAKE2b-256 | d6e84a458238d0776f5852f4d0c0b4f71a547cf7010effa68ff7ef6e958ad3f4 |