
Auto Time Complexity Estimator — empirically measure Big-O with one line


bigo — Auto Time Complexity Estimator

Python 3.8+ · MIT License · Zero dependencies

One function call. Instant Big-O estimate. No static analysis, no AST tricks — pure empirical measurement.

from bigo import analyze

analyze(my_function)
────────────────────────────────────────────────────
  bigo analysis  ·  my_function()
────────────────────────────────────────────────────
  🟡  Estimated complexity : O(n log n)  (Linearithmic)
     R² fit score         : 0.9994  [high confidence]
     Data points used     : 14

           n      avg time   inner loops
  ──────────  ────────────  ────────────
          10    0.0014 ms        10,000
          16    0.0023 ms        10,000
          ...
      32,768    1.4821 ms             1
────────────────────────────────────────────────────

How it works

Instead of analysing source code (which is hard and often wrong), bigo:

  1. Generates inputs of geometrically-increasing sizes (n ≈ 10 → 50 000)
  2. Measures runtime for each size — intelligently averaging out noise
  3. Fits known curves (O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ)) via least-squares
  4. Returns the best fit with an R² confidence score
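Step 4 can be sketched in pure Python. This is an illustrative re-implementation, not the library's actual fitting code: for each candidate curve f(n) it finds the scale c minimising the squared error of c·f(n) against the measured times, then scores each fit with R² and keeps the best.

```python
import math

# Candidate growth curves (name -> f(n)); an illustrative subset.
CURVES = {
    "O(1)":       lambda n: 1.0,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: float(n) ** 2,
}

def best_fit(sizes, times):
    """Return (notation, r_squared) for the best least-squares fit c*f(n)."""
    mean_t = sum(times) / len(times)
    ss_tot = sum((t - mean_t) ** 2 for t in times) or 1e-30
    best = ("?", -math.inf)
    for name, f in CURVES.items():
        fs = [f(n) for n in sizes]
        # Closed-form optimal scale for the one-parameter model t ≈ c * f(n):
        c = sum(t * v for t, v in zip(times, fs)) / sum(v * v for v in fs)
        ss_res = sum((t - c * v) ** 2 for t, v in zip(times, fs))
        r2 = 1.0 - ss_res / ss_tot
        if r2 > best[1]:
            best = (name, r2)
    return best

# Synthetic n·log n timings fit their own curve almost perfectly:
sizes = [10, 100, 1000, 10000]
times = [n * math.log2(n) * 1e-7 for n in sizes]
print(best_fit(sizes, times))  # best fit is O(n log n), R² ≈ 1
```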

Problems solved

  Problem                     Solution
  ──────────────────────────  ─────────────────────────────────────────────────
  CPU noise & variance        Average 7 independent runs; trim min/max outliers
  Small-n bias                15 log-spaced sizes up to ~50 000
  Functions too fast to time  Auto-repeat in inner loop; divide elapsed by count
  Exponential blowup (2ⁿ)     Per-run and total wall-clock budget; early stop
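The middle two rows of the table describe a timeit-style measurement loop. A minimal sketch (the names are illustrative, not the library's internals): the inner-loop count is auto-calibrated until one batch takes long enough to time reliably, each size is measured several times, and the min/max outliers are trimmed before averaging.

```python
import time

def timed_average(func, arg, repeat=7, min_batch=1e-3):
    """Average per-call runtime of func(arg), timeit-style.

    Doubles the inner-loop count until one batch exceeds min_batch
    seconds, then times `repeat` batches, trims the fastest and
    slowest, and averages the rest.
    """
    inner = 1
    while True:  # calibrate: make one batch long enough to measure
        start = time.perf_counter()
        for _ in range(inner):
            func(arg)
        if time.perf_counter() - start >= min_batch:
            break
        inner *= 2

    runs = []
    for _ in range(repeat):
        start = time.perf_counter()
        for _ in range(inner):
            func(arg)
        runs.append((time.perf_counter() - start) / inner)

    runs.sort()
    trimmed = runs[1:-1] if len(runs) > 2 else runs  # drop min/max outliers
    return sum(trimmed) / len(trimmed)

avg = timed_average(sorted, list(range(1000)))
print(f"{avg * 1e3:.4f} ms per call")
```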

Installation

pip install bigo-time

No dependencies — pure Python stdlib. The import name is bigo.


Usage

Basic

from bigo import analyze

def my_sort(lst):
    return sorted(lst)

result = analyze(my_sort)

Custom input generator

# Default: list of n ints. Override for your data shape:
analyze(my_sort, input_generator=lambda n: list(range(n, 0, -1)))

# Dict-based function
def count_values(d: dict):
    return len(set(d.values()))

analyze(count_values, input_generator=lambda n: {i: i % 100 for i in range(n)})
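The same pattern works for any input type — the generator just has to map n to an input of "size" n. A hypothetical string-processing example (count_vowels and string_input are illustrative, not part of the library):

```python
def count_vowels(s: str) -> int:
    """Toy O(n) function over a string."""
    return sum(c in "aeiou" for c in s)

def string_input(n: int) -> str:
    """Map n to a string of exactly n characters."""
    return ("lorem ipsum " * (n // 12 + 1))[:n]

# analyze(count_vowels, input_generator=string_input)
```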

Custom sizes

# Probe specific sizes
analyze(my_sort, sizes=[100, 500, 1000, 5000, 10000])
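If you want the default log-spaced shape but different bounds, the sizes are easy to generate by hand. A sketch (the library's own spacing may differ slightly):

```python
def log_spaced(lo=10, hi=50_000, count=15):
    """Up to `count` geometrically spaced integers from lo to hi."""
    ratio = (hi / lo) ** (1 / (count - 1))
    sizes = {round(lo * ratio ** i) for i in range(count)}  # dedupe rounded values
    return sorted(sizes)

print(log_spaced())  # 15 sizes from 10 up to 50000
# analyze(my_sort, sizes=log_spaced(100, 10_000, count=10))
```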

Programmatic access

result = analyze(my_sort, print_result=False)

print(result.complexity.notation)   # "O(n log n)"
print(result.complexity.name)       # "Linearithmic"
print(result.r_squared)             # 0.9994
print(result.confidence)            # "high"
print(result.input_sizes)           # [10, 16, 25, ...]
print(result.runtimes)              # [1.4e-6, 2.3e-6, ...]

Verbose mode

analyze(my_sort, verbose=True)
# → n=      10  inner=10,000  avg=  0.0014 ms  stdev=0.0001 ms
# → n=      16  inner=10,000  avg=  0.0023 ms  stdev=0.0002 ms
# ...

API reference

analyze(func, *, ...)

  Parameter        Type                  Default            Description
  ───────────────  ────────────────────  ─────────────────  ─────────────────────────────────
  func             Callable              (required)         Function to analyse
  input_generator  Callable[[int], Any]  auto-detected      f(n) → input of size n
  sizes            List[int]             log-spaced 10→50k  Input sizes to probe
  repeat           int                   7                  Outer timing repeats per size
  time_budget      float                 3.0                Per-run abort threshold (seconds)
  total_budget     float                 30.0               Total wall-clock limit (seconds)
  verbose          bool                  False              Print live timing lines
  print_result     bool                  True               Print summary table

Returns AnalysisResult.

AnalysisResult

  Attribute             Type             Description
  ────────────────────  ───────────────  ────────────────────────────
  .func_name            str              Name of analysed function
  .complexity           ComplexityClass  Best-fit complexity class
  .complexity.notation  str              e.g. "O(n log n)"
  .complexity.name      str              e.g. "Linearithmic"
  .r_squared            float            Goodness-of-fit (0–1)
  .confidence           str              "high" / "medium" / "low"
  .input_sizes          List[int]        n values measured
  .runtimes             List[float]      Per-call averages (seconds)

Caveats

  • Empirical estimation is probabilistic, not exact. Noise, caching, and branch prediction can all affect results.
  • O(1) and O(log n) are hard to distinguish — log n grows so slowly that timing noise can swamp the difference.
  • Results reflect average-case behaviour on the default input shape. Adversarial inputs may show different complexity.
  • For O(2ⁿ) functions the early-stop kicks in — you'll get a partial result but the trend is unmistakable.

License

MIT
