PyCodeHash
Data pipelines are of paramount importance in data science, engineering and analysis. Often, parts of a pipeline are unchanged between runs, and recomputing those nodes is wasteful, especially for larger datasets. PyCodeHash is a generic data and code hashing library that facilitates downstream caching.
🚩 The output of hash_func for both functions below is identical: 38d6e9f262ab77f6536e14c74be687ce2cb44cdebb7045e5b2f51946215cf4d0 🚩
Read more on the documentation site.
def func(data, key_col, value_col, **kwargs):
    if not isinstance(key_col, str) or not isinstance(value_col, str):
        raise TypeError(
            f"Column names must be strings, got {key_col}:{type(key_col)} and {value_col}:{type(value_col)}"
        )
    reserved_col = "index"
    if reserved_col in (key_col, value_col):
        raise ValueError(f"Reserved keyword: `{reserved_col}`")
    data = data[~data.isnull().any(axis=1)].copy()
    data[key_col] = data[key_col].astype(int)
    column_names = [key_col, value_col]
    for column_name in column_names:
        print(f"Unique values in {column_name}", list(data[column_name].unique()))
    return dict(zip(data[key_col], data[value_col]))
Sample 1: An implementation of a function that creates a mapping from two columns in a pandas DataFrame. Hash: 38d6e9f262ab77f6536e14c74be687ce2cb44cdebb7045e5b2f51946215cf4d0
from __future__ import annotations

import logging  # on purpose unused import

import pandas as pd

def create_df_mapping(data: pd.DataFrame, key_col: str, value_col: str, **kwargs) -> dict[int, str]:
    """Example function to demonstrate PyCodeHash.

    This function takes a pandas DataFrame and two column names, and turns them into a dictionary.

    Args:
        data: DataFrame containing the data
        key_col: column
    """
    legacy_variable = None
    if not isinstance(key_col, str) or not isinstance(value_col, str):
        raise TypeError(
            "Column names must be strings, got {key_col}:{key_type} and {value_col}:{value_type}".format(
                key_col=key_col,
                key_type=type(key_col),
                value_col=value_col,
                value_type=type(value_col),
            )
        )
    else:
        reserved_col = str("index")
        if key_col == reserved_col:
            raise ValueError("Reserved keyword: `{}`".format(reserved_col))
        elif value_col == reserved_col:
            raise ValueError("Reserved keyword: `{}`".format(reserved_col))
    data = data[~data.isnull().any(axis=1)].copy()
    data[key_col] = data[key_col].astype(int)
    column_names = [key_col, value_col]
    for index, column_name in enumerate(column_names):
        print(f"Unique values in {column_names[index]}", list(data[column_names[index]].unique()))
    return {
        key: value
        for key, value in zip(data[key_col], data[value_col])
    }
Sample 2: An alternative implementation of the snippet above. Hash: 38d6e9f262ab77f6536e14c74be687ce2cb44cdebb7045e5b2f51946215cf4d0
Detecting changes in data pipelines
The canonical way to check if two things are equal is to compare their hashes. Learn more about how PyCodeHash detects changes in:
- Python Functions
- SQL Queries
- Datasets: Files, Directories, S3, Hive
- Python dependencies
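As an illustration of how downstream caching can build on this, the sketch below records each node's code hash and skips nodes whose hash is unchanged since the last run. This is a minimal sketch, not part of the PyCodeHash API beyond FunctionHasher; the run_if_changed helper and the JSON hash store are hypothetical.

import json
from pathlib import Path
from pycodehash import FunctionHasher

fh = FunctionHasher()
cache_path = Path("node_hashes.json")  # hypothetical location of the hash store

def run_if_changed(func, *args, **kwargs):
    """Run `func` only when its code hash differs from the previous run (sketch)."""
    hashes = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    h = fh.hash_func(func)
    if hashes.get(func.__name__) == h:
        print(f"`{func.__name__}` unchanged, skipping")
        return None  # a real pipeline would return the cached output here
    result = func(*args, **kwargs)
    hashes[func.__name__] = h
    cache_path.write_text(json.dumps(hashes))
    return result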
Installation
PyCodeHash is available from PyPI:
pip install pycodehash
Examples
Python
Use the FunctionHasher to obtain the hash of a Python function object:
from pycodehash import FunctionHasher
from tliba import compute_moments
from tliba.etl import add_bernoulli_samples, combine_random_samples
fh = FunctionHasher()
# Hash the function `add_bernoulli_samples`
h1 = fh.hash_func(add_bernoulli_samples)
print("Hash for `add_bernoulli_samples`", h1)
# Hash the function `compute_moments`
h2 = fh.hash_func(compute_moments)
print("Hash for `compute_moments`", h2)
# Hash the function `combine_random_samples`
h3 = fh.hash_func(combine_random_samples)
print("Hash for `combine_random_samples`", h3)
SQL
Hash SQL queries and files using the SQLHasher (requires pip install pycodehash[sql]):
from pathlib import Path
from pycodehash.sql.sql_hasher import SQLHasher
# First query
query_1 = "SELECT * FROM db.templates"
# The second query is equivalent, but has additional whitespace
query_2 = "SELECT\n * \nFROM \n db.templates"
# Write the second query to a file
query_2_file = Path("/tmp/query.sql")
query_2_file.write_text(query_2)
# Create the SQLHasher object for SparkSQL
hasher = SQLHasher(dialect="sparksql")
# We can hash a string
print(hasher.hash_query(query_1))
# Or pass a path
print(hasher.hash_file(query_2_file))
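Both print statements above are expected to output the same digest, since the two queries differ only in formatting. As a quick check, reusing the objects defined above (a sketch):

# Expected to match: only whitespace differs between the two queries
assert hasher.hash_query(query_1) == hasher.hash_file(query_2_file)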
Datasets
Hash data, such as files, directories, and database tables:
from pathlib import Path
from pycodehash.datasets import LocalFileHash, LocalDirectoryHash
# Hash a single file
fh = LocalFileHash()
print(fh.collect_metadata("example.py"))
# {'last_modified': datetime.datetime(2023, 11, 24, 23, 38, 17, 524024), 'size': 543}
print(fh.compute_hash("example.py"))
# 6189721d3ecdf86503a82c07eed82743069ebbf39e974f33ca684809e67e9e0e
# Hash a directory
dh = LocalDirectoryHash()
# Recursively hash all files in a directory
print(len(dh.collect_partitions(Path(__file__).parent / "src")))
# 29
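These hashes can drive dataset change detection between pipeline runs. A minimal sketch, assuming a state file that stores the hash from the previous run; the dataset_changed helper and the state file are hypothetical:

from pathlib import Path
from pycodehash.datasets import LocalFileHash

fh = LocalFileHash()

def dataset_changed(data_file: str, state_file: Path) -> bool:
    """True when `data_file`'s hash differs from the one stored on the previous run (sketch)."""
    current = fh.compute_hash(data_file)
    previous = state_file.read_text() if state_file.exists() else None
    state_file.write_text(current)
    return current != previous

print(dataset_changed("example.py", Path("example.hash")))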
Python Package Dependencies
Hash a user-provided list of Python packages that your code depends on. This may be a subset of all dependencies: for example, the key libraries whose version changes should trigger a rerun of your pipeline. The hasher retrieves the installed versions of the listed packages and hashes those. We emphasize that it is up to the user to provide the list of relevant dependencies.
from pycodehash.dependency import PythonDependencyHash
# hash a list of dependencies
hasher = PythonDependencyHash()
print(hasher.collect_metadata(dependencies=["pycodehash", "rope"], add_python_version=True))
# hasher retrieves the installed package versions found
# {'pycodehash': '0.2.0', 'rope': '1.11.0', 'Python': '3.11'}
print(hasher.compute_hash(dependencies=["pycodehash", "rope"], add_python_version=True))
# cecb8036ad61235c2577db9943f519b824f7a25e449da9cd332bc600fb5dccf0
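To turn this into a rerun trigger, compare the current hash against one recorded on a previous run. A minimal sketch; the stored value below reuses the digest printed above, and the comparison logic is hypothetical:

from pycodehash.dependency import PythonDependencyHash

hasher = PythonDependencyHash()
current = hasher.compute_hash(dependencies=["pycodehash", "rope"], add_python_version=True)
stored = "cecb8036ad61235c2577db9943f519b824f7a25e449da9cd332bc600fb5dccf0"  # from the previous run
if current != stored:
    print("Dependency versions changed, rerun the pipeline")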
License
PyCodeHash is completely free, open-source and licensed under the MIT license.
File details
Details for the file pycodehash-0.7.0.tar.gz.
File metadata
- Download URL: pycodehash-0.7.0.tar.gz
- Upload date:
- Size: 48.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | f385a76ea28fe3d99c1848333438890233b1a30e7741c8dae10691f18d57ac9a
MD5 | d0aef3c83ac593c39a073cb90299b8c4
BLAKE2b-256 | e7050cca02de2776b4ef78f9935ebcc703b90f27cafee616c379e51858140e02
File details
Details for the file pycodehash-0.7.0-py3-none-any.whl.
File metadata
- Download URL: pycodehash-0.7.0-py3-none-any.whl
- Upload date:
- Size: 34.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 322d30d4db96aa46d6f47eb9752d402fc2e6361fb427fa65eac0050ff4084498
MD5 | ae42e9160f36f2cfddcd84e906aa05bf
BLAKE2b-256 | c4199188af8c4e7fd29ff006a71d06cd5f9520c4697aeeb18e4af8221b0db8ec