
Library focusing on row-major organization of tabular data and control over the Excel application

Project description

For example usage, see:

https://github.com/michael-ross-ven/vengeance_example/blob/main/vengeance_example/flux_example.py

https://github.com/michael-ross-ven/vengeance_example/blob/main/vengeance_example/excel_example.py

Managing data stored as rows and columns shouldn't be complicated.

When given a list of lists in Python, your first instinct is to loop over rows and modify column values, in-place. It's the most natural way to think about the data, because conceptually, each row is some entity, and each column is a property of that row, much like a list of objects.

A headache when dealing with a list of lists, however, is having to keep track of columns by integer index; it would be nice to replace the indices on each row with named attributes, and to have these applied even when the columns aren't known ahead of time, such as when pulling data from a SQL table or csv file.

for row in matrix:
    row[17]            # what's in that 18th column again?

for row in matrix:
    row.customer_id    # oh, duh
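The standard library can already approximate this idea with collections.namedtuple — a minimal sketch with illustrative column names (though, unlike the mutable rows shown later, namedtuples are read-only):

```python
from collections import namedtuple

# header-first matrix, as in the examples below
matrix = [['customer_id', 'attribute_b'],
          ['c-001',       'b'],
          ['c-002',       'b']]

header, *data = matrix
Row = namedtuple('Row', header)           # field names taken from the first row
rows = [Row(*values) for values in data]

first = rows[0].customer_id               # no more row[17]
```
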

Doesn't the pandas DataFrame already solve this?

In a DataFrame, data is taken out of its native nested list format and is organized in column-major order, which comes with some advantages as well as drawbacks.

eg Row-Major Order:
    [['attribute_a', 'attribute_b', 'attribute_c'],
     ['a',           'b',           3.0],
     ['a',           'b',           3.0],
     ['a',           'b',           3.0]]
eg Column-Major Order:
    {'attribute_a': array(['a', 'a', 'a'], dtype='<U1'),
     'attribute_b': array(['b', 'b', 'b'], dtype='<U1'),
     'attribute_c': array([3.,   3.,  3.], dtype=float64)}
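The two layouts are straightforward to convert between in pure Python — a quick sketch using plain lists in place of numpy arrays:

```python
matrix = [['attribute_a', 'attribute_b', 'attribute_c'],
          ['a',           'b',           3.0],
          ['a',           'b',           3.0],
          ['a',           'b',           3.0]]

# row-major -> column-major
header, *rows = matrix
columns = {name: [row[i] for row in rows]
           for i, name in enumerate(header)}

# column-major -> row-major (zip transposes the columns back into rows)
matrix_back = [list(columns.keys()),
               *map(list, zip(*columns.values()))]
```
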

In column-major order, the values in a single column are usually all of the same datatype, so they can be packed into consecutive memory addresses as an actual array and iterated extremely quickly. But this comes at a cost: re-organizing the data as it's intuitively understood by humans, where each row is some entity and each column is a property of that row, is agonizingly slow. (DataFrame.iterrows() and DataFrame.apply() incur a huge performance penalty, and can be 1,000× slower to iterate than Python's built-in list.)

DataFrames are also intended to make heavy use of 'vectorization', where operations are broadcast and applied to an entire set of values in parallel, as SIMD instructions at the microprocessor level. But discouraging explicit loops over a DataFrame means memorizing specialized methods for almost every operation and modification, which often makes the syntax convoluted.

This can lead to code that is counter-intuitive to write and effortful to read, especially when method-chaining is overused.

# wait, what exactly does this do again?
df['column'] = np.sign(df['column'].diff().fillna(0)).shift(-1).fillna(0)
summary = df.groupby('column').apply(lambda x: (x['column'].head(1),
                                                x.shape[0],
                                                x['start'].iloc[-1] - x['start'].iloc[0]))
(see also the YouTube talk 'So You Wanna Be a Pandas Expert? - James Powell' for how bad this can really get)

DataFrame Advantages:
  • vectorized operations on contiguous arrays are memory-efficient and very fast
DataFrame Disadvantages:
  • syntax doesn't always drive intuition or conceptual understanding
  • iteration by rows is effectively out of the question
    (which makes working with JSON data notoriously difficult)
  • vectorized operations are harder to debug / inspect when they encounter an error
  • unexpected loss of precision on numerical data
But I mean, why are we working in Python to begin with?
  • extreme emphasis on code readability
  • datatypes are abstracted away
  • we're less concerned about hyper-optimized execution times
    (but Numba, pyjion, and PyPy JIT compilers can make a big difference for almost no effort)
So does the DataFrame really reinforce what makes Python so great?

"Explicit is better than implicit"
"Sparse is better than dense"
"Readability counts"
"There should be one– and preferably only one –obvious way to do it"


vengeance.flux_cls

  • similar idea behind a pandas DataFrame, but is more closely aligned with Python's design philosophy
  • when you're willing to trade a little bit of speed for a lot of simplicity
  • a lightweight, pure-python wrapper class around list of lists
  • applies named attributes to rows; attribute values are mutable during iteration
  • provides convenience aggregate operations (sort, filter, groupby, etc)
  • excellent for extremely fast prototyping and data subjugation
Row-Major Iteration
# organized like csv data, attribute names are provided in first row
matrix = [['attribute_a', 'attribute_b', 'attribute_c'],
          ['a',           'b',           3.0],
          ['a',           'b',           3.0],
          ['a',           'b',           3.0]]
flux = vengeance.flux_cls(matrix)

# row attributes can be accessed by name or by sequential index
for row in flux:
    a = row.attribute_a
    a = row['attribute_a']
    a = row[-1]
    a = row.values[:-2]

    row.attribute_a    = None
    row['attribute_a'] = None
    row[-1]            = None
    row.values[:2]     = [None, None]

# transformations are compositional and self-documenting
for row in flux:
    row.hypotenuse = math.sqrt(row.side_a**2 +
                               row.side_b**2)

matrix = list(flux.values())
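For intuition only — this is not the library's actual implementation — here is a minimal sketch of how named, mutable row views over a shared list can be built in pure Python:

```python
class Row:
    """Named, mutable view over a plain list of values (illustrative only)."""
    __slots__ = ('_indices', 'values')

    def __init__(self, indices, values):
        object.__setattr__(self, '_indices', indices)   # {column name: index}
        object.__setattr__(self, 'values', values)      # shared, mutable list

    def __getattr__(self, name):
        return self.values[self._indices[name]]

    def __setattr__(self, name, value):
        self.values[self._indices[name]] = value        # writes through to the list

indices = {'attribute_a': 0, 'attribute_b': 1}
values  = ['a', 'b']
row = Row(indices, values)
row.attribute_a = 'z'        # mutates the underlying list in-place
```

Because each row only holds a reference to the underlying list, attribute assignment during iteration modifies the matrix directly, with no copying.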
Columns
column = flux['attribute_a']

flux.rename_columns({'attribute_a': 'renamed_a',
                     'attribute_b': 'renamed_b'})
flux.insert_columns((0, 'inserted_a'),
                    (2, 'inserted_b'))
flux.delete_columns('inserted_a',
                    'inserted_b')
Rows
rows = [['c', 'd', 4.0],
        ['c', 'd', 4.0],
        ['c', 'd', 4.0]]

flux.append_rows(rows)
flux.insert_rows(5, rows)

flux_c = flux_a + flux_b
Sort / Filter / Apply
flux.sort('attribute_c')
flux.filter(lambda row: row.attribute_b != 'c')
u = flux.unique('attribute_a', 'attribute_b')

# apply functions like you'd normally do in Python: with comprehensions
flux['attribute_new'] = [some_function(v) for v in flux['attribute_a']]
Groupby
matrix = [['year', 'month', 'random_float'],
          ['2000', '01',     random.uniform(0, 9)],
          ['2000', '02',     random.uniform(0, 9)],
          ['2001', '01',     random.uniform(0, 9)],
          ['2001', '01',     random.uniform(0, 9)],
          ['2001', '01',     random.uniform(0, 9)],
          ['2002', '01',     random.uniform(0, 9)]]
flux = vengeance.flux_cls(matrix)

dict_1   = flux.map_rows_append('year', 'month')
countifs = {k: len(rows) for k, rows in dict_1.items()}
sumifs   = {k: sum(row.random_float for row in rows)
            for k, rows in dict_1.items()}

dict_2 = flux.map_rows_nested('year', 'month')
rows_1 = dict_1[('2001', '01')]
rows_2 = dict_2['2001']['01']
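What this kind of keyed grouping produces can be sketched with the stdlib alone (illustrative data, not the library's internals):

```python
import random
from collections import defaultdict

data = [['2000', '01', random.uniform(0, 9)],
        ['2001', '01', random.uniform(0, 9)],
        ['2001', '01', random.uniform(0, 9)]]

# group values under a composite (year, month) key
grouped = defaultdict(list)
for year, month, value in data:
    grouped[(year, month)].append(value)

countifs = {k: len(v) for k, v in grouped.items()}
sumifs   = {k: sum(v) for k, v in grouped.items()}
```
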
Read / Write Files
flux.to_csv('file.csv')
flux = flux_cls.from_csv('file.csv')

flux.to_json('file.json')
flux = flux_cls.from_json('file.json')
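The header-first, row-major layout also round-trips cleanly through the stdlib csv module — a sketch (note that csv reads every value back as a string):

```python
import csv
import io

matrix = [['attribute_a', 'attribute_b', 'attribute_c'],
          ['a',           'b',           '3.0'],
          ['a',           'b',           '3.0']]

# write rows out, then read them straight back in
buffer = io.StringIO()
csv.writer(buffer).writerows(matrix)

buffer.seek(0)
matrix_back = list(csv.reader(buffer))
```
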

vengeance.lev_cls

  • (description coming soon...)

Project details


Release history

Download files

Download the file for your platform.

Source Distribution

vengeance-1.1.29.tar.gz (70.6 kB view details)

Uploaded Source

Built Distribution


vengeance-1.1.29-py3-none-any.whl (78.1 kB view details)

Uploaded Python 3

File details

Details for the file vengeance-1.1.29.tar.gz.

File metadata

  • Download URL: vengeance-1.1.29.tar.gz
  • Upload date:
  • Size: 70.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.9.5

File hashes

Hashes for vengeance-1.1.29.tar.gz
Algorithm Hash digest
SHA256 baf156f77a250704e51ac3d95e96c05c3c59de618bc965447898474e5947e9e4
MD5 17b47aed0564564e2ad7b04e00668328
BLAKE2b-256 012cecf10b0ee65d84aa1b575ce2a0d83c060fe1a7598fd9837c11cc32a191d2


File details

Details for the file vengeance-1.1.29-py3-none-any.whl.

File metadata

  • Download URL: vengeance-1.1.29-py3-none-any.whl
  • Upload date:
  • Size: 78.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.9.5

File hashes

Hashes for vengeance-1.1.29-py3-none-any.whl
Algorithm Hash digest
SHA256 71f71b43c09847b251646a1bc5b2c57f2186eaaa4e0ed1ff326fef0df716a829
MD5 78dc29fea8da020da2cb7864a880c984
BLAKE2b-256 0820199dcd9882f417a1ea3737891e8b35c29c3c6a720663239cafbe2a03d08b

