Natural Language Understanding of Network Topology
Network Language Understanding (NxLU)
NxLU is a framework designed to augment graph analysis and AI reasoning by seamlessly integrating graph topological inference with LLM-generated knowledge queries.
Table of Contents
- Overview
- Key Features
- System Requirements
- Installation
- Usage (Dispatch)
- Enabling NxLU Backend
- Multi-Hop Graph Reasoning
- Architecture
- Usage (Reasoning)
- Contributing
- License
Overview
NxLU bridges the gap between graph-based data analysis and natural language understanding by integrating graph algorithms with AI-powered reasoning using LLMs and NetworkX dispatch.
Key Features
- Dynamic Algorithm Selection: Infers user intent and dynamically selects appropriate graph algorithms to process queries.
- Graph Integration: Integrates the results of graph algorithms with LLMs to generate precise and contextually relevant responses.
- Task-Agnostic Reasoning: Supports a broad spectrum of applications including recommendations, explanations, diagnostics, clustering, ranking, and more.
- Enhanced Decision-Making: Utilizes graph algorithms like clustering, ranking, and matching for advanced data manipulations and analyses.
- Complex Relationship Handling: Leverages intricate relationships and dependencies in graph structures for deeper reasoning.
- Dynamic Contextualization: Adapts its reasoning process to the specific needs of each query, ensuring relevant and accurate outputs.
- NetworkX Backend Integration: Registers as a NetworkX backend through the dispatch mechanism, so existing NetworkX code can route calls through NxLU without modification.
System Requirements
NxLU runs on Python version 3.10 or higher and NetworkX version 3.0 or higher, and requires the following additional non-standard dependencies:
- PyTorch version 2.2 or higher
- Transformers version 4.43 or higher
- Sentence-Transformers version 3.0 or higher
- LangChain version 0.3 or higher
- Llama-Index version 0.11 or higher
- Huggingface-Hub version 0.24 or higher
For a complete list of dependencies, please refer to the pyproject.toml file in the project repository.
Installation
For the default installation of NxLU (using LangChain), run the following command:
pip install nxlu
To use the LlamaIndex framework instead, run:
pip install nxlu[llamaindex]
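Note that some shells (e.g. zsh) interpret square brackets, so the extras specifier may need quoting:

pip install "nxlu[llamaindex]"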
Then, set up your API key:
export ANTHROPIC_API_KEY=YOUR_API_KEY
# or:
export OPENAI_API_KEY=YOUR_API_KEY
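Alternatively, the key can be set from within Python before NxLU is configured. A minimal sketch using only the standard library (replace the placeholder with a real key):

import os

# Set the key for the current process only (equivalent to the export above)
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
# or: os.environ["ANTHROPIC_API_KEY"] = "YOUR_API_KEY"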
Usage (Dispatch)
Enabling NxLU Backend
To use NxLU as a backend for NetworkX, activate NetworkX's dispatch-plugin mechanism in any of the following ways:
- Environment Variable: Set the NETWORKX_AUTOMATIC_BACKENDS environment variable to automatically dispatch to NxLU for supported APIs:

export NETWORKX_AUTOMATIC_BACKENDS=nxlu
python my_networkx_script.py
- Backend Keyword Argument: Explicitly specify NxLU as the backend for particular API calls:

import os
import networkx as nx

nx.config.backends.nxlu.active = True

## set an LLM framework (both LangChain and LlamaIndex are supported, though LangChain is the default)
# nx.config.backends.nxlu.set_llm_framework("llamaindex")

openai_api_key = os.getenv("OPENAI_API_KEY")
nx.config.backends.nxlu.set_openai_api_key(openai_api_key)
nx.config.backends.nxlu.set_model_name("gpt-4o-mini") # default

## or
# anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
# nx.config.backends.nxlu.set_anthropic_api_key(anthropic_api_key)
# nx.config.backends.nxlu.set_model_name("claude-3.5-sonnet")

## or
# nx.config.backends.nxlu.set_model_name("llama3:8b")

## Optional configuration parameters
# nx.config.backends.nxlu.set_verbosity_level(0) # 0 = No logging, 1 = Info, 2 = Debug
# nx.config.backends.nxlu.set_temperature(0.1) # default
# nx.config.backends.nxlu.set_max_tokens(500) # default
# nx.config.backends.nxlu.set_num_thread(8) # default
# nx.config.backends.nxlu.set_num_gpu(0) # default

## Instantiate a networkx graph
G = nx.path_graph(4)

## Invoke networkx algorithm calls as you normally would, but with an additional backend keyword argument
nx.betweenness_centrality(G, backend="nxlu")
- Type-Based Dispatching: Wrap the graph in an LLMGraph so the backend argument is no longer needed:

import networkx as nx
from nxlu.core.interface import LLMGraph

G = nx.path_graph(4)

## Alternatively, create an `LLMGraph` object to avoid needing to specify the backend argument
H = LLMGraph(G)
nx.betweenness_centrality(H)
By integrating with NetworkX's backend system, NxLU provides a seamless way to enhance existing graph analysis workflows with advanced natural language processing and reasoning capabilities.
Multi-Hop Graph Reasoning
Architecture
NxLU's multi-hop graph reasoning mode "interrogates" a graph's topology in a sequence of stages (with or without a user query):
- User Intent Detection: Identifies the goal of the user's query using zero-shot classification.
- Graph Characterization: Describes the input graph's domain and structure.
- Graph Algorithm Selection: Predicts the most applicable graph algorithms based on the user's intent and graph context.
- Graph Preprocessing: Applies necessary preprocessing routines to optimize the graph for selected algorithms.
- Graph Algorithm Application: Applies the selected graph algorithms to the preprocessed graph.
- Response Generation: Integrates algorithm results with LLMs to generate structured and contextually relevant responses.
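The stages above compose into a single pipeline. The following toy sketch is illustrative only: the stand-in logic (keyword matching, a hand-picked algorithm table, largest-component preprocessing) is hypothetical and does not reflect NxLU's internal implementation.

import networkx as nx

def multi_hop_reason(graph: nx.Graph, query: str | None = None) -> dict:
    """Toy illustration of the six reasoning stages; not NxLU's real internals."""
    # 1. User intent detection (stub: keyword match stands in for zero-shot classification)
    intent = "centrality" if query and "connected" in query else "overview"
    # 2. Graph characterization: describe the input graph's structure
    context = {
        "nodes": graph.number_of_nodes(),
        "edges": graph.number_of_edges(),
        "directed": graph.is_directed(),
    }
    # 3. Algorithm selection based on the detected intent
    algorithms = {
        "centrality": [nx.degree_centrality, nx.betweenness_centrality],
        "overview": [nx.density, nx.average_clustering],
    }[intent]
    # 4. Preprocessing (stub: restrict to the largest connected component)
    if not graph.is_directed() and graph.number_of_nodes() > 0:
        largest = max(nx.connected_components(graph), key=len)
        graph = graph.subgraph(largest)
    # 5. Apply the selected graph algorithms
    results = {algo.__name__: algo(graph) for algo in algorithms}
    # 6. Response generation (stub: return raw results; NxLU hands these to an LLM)
    return {"intent": intent, "context": context, "results": results}

# Example: run the toy pipeline on a small built-in graph
print(multi_hop_reason(nx.karate_club_graph(), "Who is most connected?"))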
Usage (Reasoning)
In Python, first set up the configuration:
import os
import networkx as nx
from nxlu.explanation.explain import GraphExplainer
from nxlu.config import get_config, OpenAIModel, AnthropicModel
config = get_config()
## set an LLM framework (both LangChain and LlamaIndex are supported, though LangChain is the default)
# config.set_llm_framework("llamaindex")
openai_api_key = os.getenv("OPENAI_API_KEY")
config.set_openai_api_key(openai_api_key)
config.set_model_name("gpt-4o-mini") # default
## or
# anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
# config.set_anthropic_api_key(anthropic_api_key)
# config.set_model_name("claude-3.5-sonnet")
## or
# config.set_model_name("llama3:8b")
## Optional configuration parameters
config.set_verbosity_level(1)
config.set_max_tokens(500)
# config.set_temperature(0.1) # default
# config.set_num_thread(8) # default
# config.set_num_gpu(0) # default
## specify networkx algorithms by name to be included or excluded
# config.set_include_algorithms(['betweenness_centrality', 'clustering'])
# config.set_exclude_algorithms(['shortest_path'])
# config.set_enable_subgraph_retrieval(False) # defaults to `True`, which enables an experimental FAISS-based subgraph selection mechanism that retrieves a connected small-world subgraph capturing the nodes and edges most semantically similar to the user's query. If restricted to CPU-only hardware, limited RAM, or large dense graphs (e.g. >10,000 nodes), try setting this to `False`.
# config.set_enable_classification(False) # defaults to `True`; if set to `False`, the system relies solely on the include/exclude lists without performing zero-shot classification of the most "suitable" algorithms for the graph and query.
# config.set_enable_resource_constraints(False) # defaults to `True`; if set to `False`, the system ignores the resource constraints detected on the current hardware when determining which algorithms can run within tractable timeframes given the scale of the graph.
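For example, a conservative profile for large, dense graphs on CPU-only hardware might combine these options (a sketch using only the setters shown above):

## Illustrative profile for large graphs on CPU-only machines
config.set_enable_subgraph_retrieval(False) # avoid the FAISS-based retrieval
config.set_enable_classification(False) # rely on explicit algorithm lists instead
config.set_include_algorithms(["degree_centrality", "pagerank"]) # cheap algorithms only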
In the following minimal example, we use the GraphExplainer to analyze a social network graph, with or without a specific query. This example shows both cases:
G = nx.Graph()
G.add_edge('Elon Musk', 'Jeff Bezos', weight=30)
G.add_edge('Elon Musk', 'Tim Cook', weight=15)
G.add_edge('Elon Musk', 'Sundar Pichai', weight=12)
G.add_edge('Elon Musk', 'Satya Nadella', weight=20)
G.add_edge('Jeff Bezos', 'Warren Buffet', weight=25)
G.add_edge('Jeff Bezos', 'Bill Gates', weight=10)
G.add_edge('Jeff Bezos', 'Tim Cook', weight=18)
G.add_edge('Tim Cook', 'Sundar Pichai', weight=8)
G.add_edge('Tim Cook', 'Sheryl Sandberg', weight=9)
G.add_edge('Sundar Pichai', 'Bill Gates', weight=6)
G.add_edge('Sundar Pichai', 'Sheryl Sandberg', weight=7)
G.add_edge('Satya Nadella', 'Warren Buffet', weight=15)
G.add_edge('Satya Nadella', 'Sheryl Sandberg', weight=13)
G.add_edge('Bill Gates', 'Warren Buffet', weight=40)
# Initialize the explainer with configuration
explainer = GraphExplainer(config)
# Perform analysis without a specific query
response = explainer.explain(G)
print(response)
# Perform analysis with a specific query
query = "Which executive in this network is the most connected to the other executives?"
response = explainer.explain(G, query)
print(response)
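For reference, the example query can also be answered directly with plain NetworkX, using weighted degree as a simple proxy for "most connected"; the explainer's value lies in selecting such algorithms automatically and narrating the result:

# Plain-NetworkX baseline for the same question (no LLM involved)
weighted_degree = dict(G.degree(weight="weight"))
most_connected = max(weighted_degree, key=weighted_degree.get)
print(most_connected, weighted_degree[most_connected])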
Supported Algorithms for Multi-Hop Graph Reasoning
- degree_centrality
- closeness_centrality
- pagerank
- eigenvector_centrality
- load_centrality
- harmonic_centrality
- shortest_path
- voterank
- katz_centrality
- k_core
- all_pairs_shortest_path_length
- all_pairs_dijkstra_path_length
- all_pairs_bellman_ford_path_length
- all_pairs_shortest_path
- all_pairs_dijkstra_path
- all_pairs_bellman_ford_path
- find_cliques_recursive
- average_clustering
- transitivity
- effective_graph_resistance
- degree_assortativity_coefficient
- average_degree_connectivity
- average_neighbor_degree
- girvan_newman
- greedy_color
- best_partition
- resource_allocation_index
- betweenness_centrality
- diameter
- radius
- eccentricity
- articulation_points
- bridges
- average_shortest_path_length
- barycenter
- approximate_current_flow_betweenness_centrality
- current_flow_closeness_centrality
- all_pairs_lowest_common_ancestor
- capacity_scaling
- rich_club_coefficient
- center
- jaccard_coefficient
- adamic_adar_index
- preferential_attachment
- all_node_cuts
- wiener_index
- number_of_spanning_trees
- girth
- all_pairs_dijkstra
- global_efficiency
- local_efficiency
- triadic_census
- greedy_modularity_communities
- k_shell
- percolation_centrality
- closeness_vitality
- node_connectivity
- edge_current_flow_betweenness_centrality
- algebraic_connectivity
Supported Models
NxLU supports a wide range of language models from different providers, including local models served via Ollama. You can configure NxLU to use one of the following models based on your needs:
OpenAI Models:
- GPT-4 Turbo (gpt-4-turbo)
- GPT-4 (gpt-4)
- GPT-4o (gpt-4o, gpt-4o-2024-08-06)
- GPT-4o Mini (gpt-4o-mini)
- o1 Preview (o1-preview)
- o1 Mini (o1-mini)
Anthropic Models:
- Claude 2 (claude-2)
- Claude 2.0 (claude-2.0)
- Claude Instant (claude-instant)
- Claude Instant 1 (claude-instant-1)
- Claude Instant 1.1 (claude-instant-1.1)
- Claude 3 Sonnet (claude-3-sonnet)
- Claude 3.5 Sonnet (claude-3.5-sonnet)
Local Models:
- Llama 3 - 70B (llama3:70b)
- Llama 3 - 8B (llama3:8b)
- Gemma 2 - 9B (gemma2:9b)
- Qwen 2 - 7B (qwen2:7b)
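Local models are served through Ollama, which assumes an Ollama server is running on the machine and the model has already been pulled. A minimal sketch:

# Assumes a local Ollama server (`ollama serve`) and a pulled model
# (`ollama pull llama3:8b`); then point NxLU's config at it:
config.set_model_name("llama3:8b")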
Contributing
Contributions are welcome! Please open an issue or submit a pull request to discuss potential improvements or features. Before submitting, ensure that you read and follow the CONTRIBUTING guide.
License
This project is licensed under the MIT License.