
Auto Time Complexity Estimator — empirically measure Big-O with one line

Project description

bigo-time — Auto Time Complexity Estimator

PyPI version Python 3.8+ License: MIT Zero dependencies

One function call. Instant Big-O estimate. No static analysis, no AST tricks — pure empirical measurement.

from bigo_time import analyze

analyze(my_function)
────────────────────────────────────────────────────
  bigo analysis  ·  my_function()
────────────────────────────────────────────────────
  🟡  Estimated complexity : O(n log n)  (Linearithmic)
     R² fit score         : 0.9994  [high confidence]
     Data points used     : 14

           n      avg time   inner loops
  ──────────  ────────────  ────────────
          10    0.0014 ms        10,000
          16    0.0023 ms        10,000
          ...
      32,768    1.4821 ms             1
────────────────────────────────────────────────────

How it works

Instead of analysing source code (which is hard and often wrong), bigo-time:

  1. Generates inputs of geometrically-increasing sizes (n ≈ 10 → 50 000)
  2. Measures runtime for each size — intelligently averaging out noise
  3. Fits known curves (O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ)) via least-squares
  4. Returns the best fit with an R² confidence score
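Steps 1, 3 and 4 can be sketched in a few lines of pure stdlib Python. This is an illustrative reconstruction, not the package's actual code: `log_spaced` and `best_fit` are hypothetical names, and the real implementation fits more curves (including O(n³) and O(2ⁿ)) and handles degenerate data more carefully.

```python
import math

def log_spaced(lo=10, hi=50_000, k=15):
    """Step 1: k geometrically-spaced input sizes from lo to hi."""
    ratio = (hi / lo) ** (1 / (k - 1))
    return sorted({round(lo * ratio ** i) for i in range(k)})

# Candidate growth curves f(n); runtime is modelled as t ≈ a·f(n).
CURVES = {
    "O(1)":       lambda n: 1.0,
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log(n),
    "O(n²)":      lambda n: float(n) ** 2,
}

def best_fit(sizes, times):
    """Steps 3-4: least-squares fit of t = a·f(n) for each curve;
    return (notation, R²) for the curve with the highest R²."""
    mean_t = sum(times) / len(times)
    ss_tot = sum((t - mean_t) ** 2 for t in times) or 1e-12
    best = None
    for name, f in CURVES.items():
        fs = [f(n) for n in sizes]
        # Closed-form least squares for the single scale parameter a.
        a = sum(t * x for t, x in zip(times, fs)) / sum(x * x for x in fs)
        ss_res = sum((t - a * x) ** 2 for t, x in zip(times, fs))
        r2 = 1.0 - ss_res / ss_tot
        if best is None or r2 > best[1]:
            best = (name, r2)
    return best

# Synthetic linearithmic timings: t = 2e-7 · n·log(n)
sizes = log_spaced()
times = [2e-7 * n * math.log(n) for n in sizes]
print(best_fit(sizes, times)[0])  # → O(n log n)
```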

Problems solved

Problem                     Solution
CPU noise & variance        Average 7 independent runs; trim min/max outliers
Small-n bias                15 log-spaced sizes up to ~50 000
Functions too fast to time  Auto-repeat in inner loop; divide elapsed by count
Exponential blowup (2ⁿ)     Per-run and total wall-clock budget; early stop
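The first three mitigations can be combined into one timing routine. This is a hypothetical sketch of the technique, not the package's implementation; `timed_average` and its parameters are illustrative names:

```python
import time

def timed_average(fn, arg, repeat=7, min_elapsed=1e-3):
    """Time fn(arg) robustly: auto-repeat fast functions in an inner
    loop, then trim min/max over several outer runs."""
    # Auto-repeat: grow the inner loop count until one batch takes
    # long enough to measure reliably, then divide by the count.
    inner = 1
    while True:
        start = time.perf_counter()
        for _ in range(inner):
            fn(arg)
        if time.perf_counter() - start >= min_elapsed:
            break
        inner *= 10
    # Outer repeats: collect several batch timings, then drop the
    # fastest and slowest before averaging out CPU noise.
    runs = []
    for _ in range(repeat):
        start = time.perf_counter()
        for _ in range(inner):
            fn(arg)
        runs.append((time.perf_counter() - start) / inner)
    runs.sort()
    trimmed = runs[1:-1] if len(runs) > 2 else runs
    return sum(trimmed) / len(trimmed)

avg = timed_average(sorted, list(range(1000)))
print(f"{avg * 1e6:.1f} µs per call")
```

The fourth mitigation (budgets) would wrap this routine: stop probing larger sizes once a single run exceeds the per-run threshold or the total wall clock runs out.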

Installation

pip install bigo-time

No dependencies — pure Python stdlib.


Usage

Basic

from bigo_time import analyze

def my_sort(lst):
    return sorted(lst)

result = analyze(my_sort)

Custom input generator

# Default: list of n ints. Override for your data shape:
analyze(my_sort, input_generator=lambda n: list(range(n, 0, -1)))

# Dict-based function
def count_values(d: dict):
    return len(set(d.values()))

analyze(count_values, input_generator=lambda n: {i: i % 100 for i in range(n)})

Custom sizes

# Probe specific sizes
analyze(my_sort, sizes=[100, 500, 1000, 5000, 10000])

Programmatic access

result = analyze(my_sort, print_result=False)

print(result.complexity.notation)   # "O(n log n)"
print(result.complexity.name)       # "Linearithmic"
print(result.r_squared)             # 0.9994
print(result.confidence)            # "high"
print(result.input_sizes)           # [10, 16, 25, ...]
print(result.runtimes)              # [1.4e-6, 2.3e-6, ...]
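These fields make it straightforward to turn a measurement into a performance regression guard, e.g. in a test suite. A hedged sketch: the `Result`/`Complexity` stubs and the `assert_at_most_linearithmic` helper below are illustrative stand-ins, not part of the package; in real use you would pass the object returned by `analyze(..., print_result=False)`.

```python
from dataclasses import dataclass

@dataclass
class Complexity:          # stand-in for bigo_time's complexity object
    notation: str
    name: str

@dataclass
class Result:              # stand-in for bigo_time's AnalysisResult
    complexity: Complexity
    r_squared: float
    confidence: str

def assert_at_most_linearithmic(result):
    """Fail if the measured complexity is worse than O(n log n)."""
    acceptable = {"O(1)", "O(log n)", "O(n)", "O(n log n)"}
    if result.confidence == "high" and result.complexity.notation not in acceptable:
        raise AssertionError(
            f"complexity regressed to {result.complexity.notation}"
        )

# In real use: result = analyze(my_sort, print_result=False)
result = Result(Complexity("O(n log n)", "Linearithmic"), 0.9994, "high")
assert_at_most_linearithmic(result)  # passes
```

Gating on `confidence` avoids flaky failures when the fit itself is noisy.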

Verbose mode

analyze(my_sort, verbose=True)
# → n=      10  inner=10,000  avg=  0.0014 ms  stdev=0.0001 ms
# → n=      16  inner=10,000  avg=  0.0023 ms  stdev=0.0002 ms
# ...

API reference

analyze(func, *, ...)

Parameter        Type                  Default            Description
func             Callable              (required)         Function to analyse
input_generator  Callable[[int], Any]  auto-detected      f(n) → input of size n
sizes            List[int]             log-spaced 10→50k  Input sizes to probe
repeat           int                   7                  Outer timing repeats per size
time_budget      float                 3.0                Per-run abort threshold (seconds)
total_budget     float                 30.0               Total wall-clock limit (seconds)
verbose          bool                  False              Print live timing lines
print_result     bool                  True               Print summary table

Returns AnalysisResult.

AnalysisResult

Attribute             Type             Description
.func_name            str              Name of analysed function
.complexity           ComplexityClass  Best-fit complexity class
.complexity.notation  str              e.g. "O(n log n)"
.complexity.name      str              e.g. "Linearithmic"
.r_squared            float            Goodness-of-fit (0–1)
.confidence           str              "high" / "medium" / "low"
.input_sizes          List[int]        n values measured
.runtimes             List[float]      Per-call averages (seconds)

Caveats

  • Empirical estimation is probabilistic, not exact. CPU noise, caching effects, and branch prediction can all affect results.
  • O(1) and O(log n) are hard to distinguish: at these input sizes both finish in near-constant time, so their curves look almost identical.
  • Results reflect average-case behaviour on the default input shape. Adversarial inputs may exhibit a different complexity.
  • For O(2ⁿ) functions the early stop kicks in: you'll get a partial result from the sizes completed within the budget, but the exponential trend is usually clear even from a few points.

License

MIT

Download files

Download the file for your platform.

Source Distribution

bigo_time-0.1.2.tar.gz (13.0 kB)

Uploaded Source

Built Distribution


bigo_time-0.1.2-py3-none-any.whl (10.9 kB)

Uploaded Python 3

File details

Details for the file bigo_time-0.1.2.tar.gz.

File metadata

  • Download URL: bigo_time-0.1.2.tar.gz
  • Size: 13.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.4

File hashes

Hashes for bigo_time-0.1.2.tar.gz
Algorithm Hash digest
SHA256 0f3cef467239d5fd51e0315b92077bc45d92752a8602d9968a2873ebe78b5fdc
MD5 8458dcde256221b25f659b3c8a0af98d
BLAKE2b-256 cbca6f13716be73c0071a1425d74f90a6b27d27b867b82a322444a2657e5c113


File details

Details for the file bigo_time-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: bigo_time-0.1.2-py3-none-any.whl
  • Size: 10.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.4

File hashes

Hashes for bigo_time-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 ba9bb7415cce65c2267b54cb140252682e8273e0d13feffab542eed35bc1f17e
MD5 855ae458b2bf5042c5fedae289142428
BLAKE2b-256 b6ca2d239ff888c8665c7e8581009b069a15ea7b8c29e557b09f98c451b9f72f

