Streams Explorer
Explore Data Pipelines in Apache Kafka.
Features
- Visualization of streaming applications, topics, and connectors
- Monitor all or individual pipelines from multiple namespaces
- Inspection of Avro schema from schema registry
- Integration with streams-bootstrap and faust-bootstrap for deploying Kafka Streams applications
- Real-time metrics from Prometheus (consumer lag & read rate, replicas, topic size, messages in & out per second, connector tasks)
- Linking to external services for logging and analysis, such as Kibana, Grafana, AKHQ, Elasticsearch
- Customizable through Python plugins
Overview
Visit our introduction blog post for a complete overview and demo of Streams Explorer.
Installation
Docker Compose
- Forward the ports to Kafka Connect, Schema Registry, and Prometheus (other integrations are optional)
- Start the container
docker-compose up
Once the container is started, visit http://localhost:3000
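For orientation, a minimal docker-compose.yaml for this setup could look like the sketch below. The image name/tag, the exposed port, and the host addresses are assumptions based on the defaults listed under Configuration; adjust them to your environment.

```yaml
# Hypothetical sketch: image name/tag and host addresses are assumptions,
# derived from the default URLs documented under Configuration.
services:
  streams-explorer:
    image: bakdata/streams-explorer:latest  # assumed image name
    ports:
      - "3000:3000"  # frontend UI
    environment:
      SE_KAFKACONNECT__url: "http://host.docker.internal:8083"
      SE_SCHEMAREGISTRY__url: "http://host.docker.internal:8081"
      SE_PROMETHEUS__url: "http://host.docker.internal:9090"
```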
Deploying to Kubernetes cluster
- Add the Helm chart repository
helm repo add streams-explorer https://raw.githubusercontent.com/bakdata/streams-explorer/master/helm-chart/
- Install
helm upgrade --install --values helm-chart/values.yaml streams-explorer streams-explorer/streams-explorer
Standalone
Backend
- Install dependencies
pip install -r requirements.txt
- Forward the ports to Kafka Connect, Schema Registry, and Prometheus (other integrations are optional)
- Configure the backend in settings.yaml.
- Start the backend server
uvicorn main:app
Frontend
- Install dependencies
npm install
- Start the frontend server
npm start
Visit http://localhost:3000
Configuration
Depending on your type of installation, set the configuration for the backend server in the corresponding file:
- Docker Compose: docker-compose.yaml
- Kubernetes: helm-chart/values.yaml
- Standalone: backend/settings.yaml
All configuration options can be written as environment variables using underscore notation and the prefix SE, e.g. SE_K8S__deployment__cluster=false.
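To illustrate the underscore notation, this small helper derives the environment-variable name for a dotted settings key. The naming rule (uppercase first segment, double underscores between segments) is inferred from the single documented example, so treat it as an illustration rather than a specification.

```python
def to_env_var(key: str, prefix: str = "SE") -> str:
    """Translate a dotted settings key into its environment-variable form.

    Rule inferred from the documented example:
    k8s.deployment.cluster -> SE_K8S__deployment__cluster
    """
    first, *rest = key.split(".")
    return "__".join([f"{prefix}_{first.upper()}", *rest])


print(to_env_var("k8s.deployment.cluster"))  # SE_K8S__deployment__cluster
```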
The following configuration options are available:
General
- graph_update_every: Update the graph every X seconds (integer, required, default: 300)
- graph_layout_arguments: Arguments passed to graphviz layout (string, required, default: -Grankdir=LR -Gnodesep=0.8 -Gpad=10)
Kafka Connect
- kafkaconnect.url: URL of the Kafka Connect server (string, required, default: http://localhost:8083)
- kafkaconnect.displayed_information: Configuration options of Kafka connectors displayed in the frontend (list of dict, required, default: [{'name': 'Transformer', 'key': 'transforms.changeTopic.regex'}])
Kubernetes
- k8s.deployment.cluster: Whether Streams Explorer is deployed to a Kubernetes cluster (bool, required, default: false)
- k8s.deployment.context: Name of the cluster (string, optional if running in cluster, default: kubernetes-cluster)
- k8s.deployment.namespaces: Kubernetes namespaces (list of string, required, default: ['kubernetes-namespace'])
- k8s.containers.ignore: Names of containers that should be ignored/hidden (list of string, default: ['prometheus-jmx-exporter'])
- k8s.displayed_information: Details of the pod that should be displayed (list of dict, default: [{'name': 'Labels', 'key': 'metadata.labels'}])
- k8s.labels: Labels used to set attributes of nodes (list of string, required, default: ['pipeline'])
- k8s.independent_graph.label: Attribute of nodes the pipeline name should be extracted from (string, required, default: pipeline)
- k8s.consumer_group_annotation: Annotation the consumer group name should be extracted from (string, required, default: consumerGroup)
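Put together, the Kubernetes options above correspond to a settings.yaml fragment like the following. The values shown are the documented defaults; the nesting simply mirrors the dotted option names, so verify it against your actual settings.yaml.

```yaml
# Sketch of the k8s section of settings.yaml, using the documented defaults.
k8s:
  deployment:
    cluster: false
    context: kubernetes-cluster
    namespaces: ['kubernetes-namespace']
  containers:
    ignore: ['prometheus-jmx-exporter']
  displayed_information:
    - name: Labels
      key: metadata.labels
  labels: ['pipeline']
  independent_graph:
    label: pipeline
  consumer_group_annotation: consumerGroup
```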
Schema Registry
- schemaregistry.url: URL of Schema Registry (string, required, default: http://localhost:8081)
Prometheus
- prometheus.url: URL of Prometheus (string, required, default: http://localhost:9090)
The following exporters are required to collect Kafka metrics for Prometheus:
AKHQ
- akhq.url: URL of AKHQ (string, default: http://localhost:8080)
- akhq.cluster: Name of the cluster (string, default: kubernetes-cluster)
Grafana
- grafana.url: URL of Grafana (string, default: http://localhost:3000)
- grafana.dashboards.topics: Path to the topics dashboard (string); sample dashboards for topics and consumer groups are included in the ./grafana subfolder
- grafana.dashboards.consumergroups: Path to the consumer groups dashboard (string)
Kibana
- kibanalogs.url: URL of Kibana logs (string, default: http://localhost:5601)
Elasticsearch
For the Kafka Connect Elasticsearch connector:
- esindex.url: URL of the Elasticsearch index (string, default: http://localhost:5601/app/kibana#/dev_tools/console)
Plugins
- plugins.path: Path to the folder containing plugins, relative to the backend (string, required, default: ./plugins)
- plugins.extractors.default: Whether to load the default extractors (bool, required, default: true)
Demo pipeline
ATM Fraud detection with streams-bootstrap
Plugin customization
It is possible to create your own linker, metric provider, and extractors in Python by implementing the LinkingService, MetricProvider, or Extractor classes. This way you can customize Streams Explorer to your specific setup and services. As an example we provide the DefaultLinker as LinkingService. The default MetricProvider supports Prometheus. Furthermore, the following default Extractor plugins are included:
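As a rough sketch of the plugin idea, the following defines a custom extractor. The Extractor base class is named in the text above, but the stand-in class and the extract method's name and signature here are assumptions for illustration, not the actual streams-explorer API; consult the project source for the real interface.

```python
# Hypothetical sketch: the Extractor base class exists per the text above,
# but this stand-in and its method names are assumptions, not the real
# streams-explorer plugin API.
from abc import ABC, abstractmethod


class Extractor(ABC):
    """Stand-in for the streams-explorer Extractor base class."""

    @abstractmethod
    def extract(self, config: dict) -> dict:
        """Pull custom attributes out of an application's configuration."""


class PipelineLabelExtractor(Extractor):
    """Example: read the pipeline name from a pod label (hypothetical)."""

    def extract(self, config: dict) -> dict:
        labels = config.get("metadata", {}).get("labels", {})
        return {"pipeline": labels.get("pipeline")}


extractor = PipelineLabelExtractor()
print(extractor.extract({"metadata": {"labels": {"pipeline": "atm-fraud"}}}))
```

A plugin like this would live in the folder configured via plugins.path so the backend can discover and load it.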