An unofficial Python library for easy interaction with the Humio API

Project description

Humio API (unofficial lib)

This project requires Python >= 3.6.1.

This is an unofficial library for interacting with Humio's API. If you're looking for the official Python Humio library, it can be found here: humiolib. This library mostly exists because the official library was very basic back in 2019, when I first needed this.

Installation

pip install humioapi

Main features

  • Untested and poorly documented code
  • CLI companion tool available at humiocli.
  • Asynchronous and synchronous streaming queries, powered by httpx.
  • QueryJobs, which can be polled once or until completed.
  • Chainable relative time modifiers, similar to Splunk's, e.g. -1d@h-30m (see the example right after this list).
  • List repository details (NOTE: normal Humio users cannot see repos they don't have read permission to).
  • Easy environment-variable based configuration.
  • Ingest data to Humio, although you probably want to use Filebeat for anything other than one-off ingestion to your sandbox.
  • Create and update parsers.
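
The relative time modifiers can be passed directly as start and stop wherever a search expects a time. A minimal sketch (the query and repository are placeholders, and the client setup is explained in the Usage section below):

import humioapi
import logging
humioapi.initialize_logging(level=logging.INFO, fmt="human")

api = humioapi.HumioAPI(**humioapi.loadenv())

stream = api.streaming_search(
    query="log_type=trace",
    repos=["sandbox"],
    start="-1d@h-30m",  # back one day, snapped to the hour, then another 30 minutes back
    stop="now",
)
for event in stream:
    print(event)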

Usage

For convenience, your Humio URL and token should be set in the environment variables HUMIO_BASE_URL and HUMIO_TOKEN. These can be set in ~/.config/humio/.env and loaded by humioapi.loadenv().
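
For example, a minimal ~/.config/humio/.env could look like this (both values are placeholders for your own Humio instance and token):

HUMIO_BASE_URL=https://cloud.humio.com
HUMIO_TOKEN=your-secret-api-token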

Query repositories

Create an instance of HumioAPI to get started:

import humioapi
import logging
humioapi.initialize_logging(level=logging.INFO, fmt="human")

api = humioapi.HumioAPI(**humioapi.loadenv())
repositories = api.repositories()
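
Judging by the Jupyter examples further down, the returned value is a dict keyed by repository name with a details dict per repository. A small sketch that continues from the snippet above and lists the repositories you can read (the read_permission key is the one used in those examples):

for name, details in repositories.items():
    if details.get("read_permission"):
        print(name)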

Iterate over synchronous streaming searches sequentially

import humioapi
import logging
humioapi.initialize_logging(level=logging.INFO, fmt="human")

api = humioapi.HumioAPI(**humioapi.loadenv())
stream = api.streaming_search(
    query="log_type=trace user=someone",
    repos=['frontend', 'backend', 'integration'],
    start="-1week@day",
    stop="now"
)
for event in stream:
    print(event)
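
Each event is yielded as a dict of Humio fields while the search is still streaming, so the results can be processed incrementally. As a sketch, instead of printing each event as above, the stream could be written out as JSON lines (assuming every event is JSON-serializable):

import json

with open("events.jsonl", "w", encoding="utf-8") as f:
    for event in stream:
        f.write(json.dumps(event) + "\n")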

Iterate over asynchronous streaming searches in parallel, from a synchronous context

import asyncio
import humioapi
import logging

humioapi.initialize_logging(level=logging.INFO, fmt="human")
api = humioapi.HumioAPI(**humioapi.loadenv())

queries = [{
    "query": "chad index.html | select(@timestamp)",
    "repo": "sandbox",
    "start": "-7d@d",
    "stop": "-4d@d",
    }, {
    "query": "chad index.html | select(@rawstring)",
    "repo": "sandbox",
    "start": "-4d@d",
    "stop": "now",
}]

# Create and install a dedicated event loop, since we are driving the async searches from a synchronous context
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

try:
    tasks = api.async_streaming_search(queries, loop=loop, concurrent_limit=10)
    for item in humioapi.consume_async(tasks, loop):
        print(item)
finally:
    loop.close()
    asyncio.set_event_loop(None)

Jupyter Notebook

pew new --python=python36 humioapi
# run the following commands inside the virtualenv
pip install git+https://github.com/gwtwod/humioapi.git
pip install ipykernel seaborn matplotlib
python -m ipykernel install --user --name 'python36-humioapi' --display-name 'Python 3.6 (venv humioapi)'

Start the notebook by running jupyter-notebook, then choose the newly created kernel when creating a new notebook.

Run this code to get started:

import humioapi
import logging
humioapi.initialize_logging(level=logging.INFO, fmt="human")

api = humioapi.HumioAPI(**humioapi.loadenv())
results = api.streaming_search(
    query='log_type=trace user=someone',
    repos=['frontend', 'backend'],
    start="@d",
    stop="now"
)
for i in results:
    print(i)

To get a list of all readable repositories with names starting with 'frontend':

repos = sorted(
    k for k, v in api.repositories().items()
    if v['read_permission'] and k.startswith('frontend')
)

Making a timechart (lineplot):

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

sns.set(color_codes=True)
sns.set_style('darkgrid')

# Pick a search window; the relative time modifiers shown earlier work here as well
start = "-24h@h"
stop = "now"

results = api.streaming_search(query='log_type=stats | timechart(series=metric)', repos=['frontend'], start=start, stop=stop)
df = pd.DataFrame(results)
df['_count'] = df['_count'].astype(float)

df['_bucket'] = pd.to_datetime(df['_bucket'], unit='ms', origin='unix', utc=True)
df.set_index('_bucket', inplace=True)

df.index = df.index.tz_convert('Europe/Oslo')
df = df.pivot(columns='metric', values='_count')

sns.lineplot(data=df)

SSL and proxies

All HTTP traffic is done through httpx, which allows customizing SSL and proxy behaviour through environment variables. See httpx docs for details.
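
For example, a corporate proxy and an internal CA bundle can be picked up from the standard environment variables that httpx honours, provided they are set before the client is created and httpx's default trust_env behaviour is left in place. A sketch with made-up values:

import os
import humioapi

# Standard httpx environment variables (values below are hypothetical)
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"
os.environ["SSL_CERT_FILE"] = "/etc/ssl/certs/internal-ca.pem"

api = humioapi.HumioAPI(**humioapi.loadenv())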

Download files

Download the file for your platform.

Source Distribution

humioapi-0.6.2.tar.gz (20.8 kB, Source)

Built Distribution

humioapi-0.6.2-py3-none-any.whl (21.3 kB, Python 3)

File details

Details for the file humioapi-0.6.2.tar.gz.

File metadata

  • Download URL: humioapi-0.6.2.tar.gz
  • Upload date:
  • Size: 20.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.0.10 CPython/3.8.5 Linux/5.4.0-56-generic

File hashes

Hashes for humioapi-0.6.2.tar.gz:

  • SHA256: 708c54e76455a773285081d56f2b2460be65dd25feff0ab76eb2df965440694a
  • MD5: 5d21393afa4fc76e7abcdf286c82bec6
  • BLAKE2b-256: 13129144d355af613983422adf14b7194c1967dbbb35d073a9c0d019398dd674

File details

Details for the file humioapi-0.6.2-py3-none-any.whl.

File metadata

  • Download URL: humioapi-0.6.2-py3-none-any.whl
  • Upload date:
  • Size: 21.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.0.10 CPython/3.8.5 Linux/5.4.0-56-generic

File hashes

Hashes for humioapi-0.6.2-py3-none-any.whl:

  • SHA256: fc0c0242566567be6503a4b28fc8407a21e98a2b581b08908ff90c5210ae98ab
  • MD5: c557cea3e783b61cb2fb14f43cd4d20a
  • BLAKE2b-256: 7125f33f901ab150fb59d6734b769a549651d6639e1e9aa4d4c22c62e32f00b8
