A Python package for Azure Data Lake Storage Gen2 (abfss://), Microsoft Fabric Lakehouse (abfss://), Google Cloud Storage (gs://bucket), and AWS S3 (s3://bucket). It detects dataset formats and retrieves schemas for Iceberg, Delta, and Parquet, identifies partition columns for Parquet datasets, and supports querying Delta, Parquet, and Iceberg data through SQL.
DataLakeSurfer
A unified toolkit for detecting, exploring, and validating data lake formats across Azure Data Lake Storage Gen2, Microsoft Fabric Lakehouse, Google Cloud Storage, and AWS S3.
DataLakeSurfer lets you:
- Detect whether a directory is Iceberg, Delta, or Parquet.
- Retrieve normalized schema for supported formats.
- Detect partition columns for Parquet datasets.
- Query Delta, Parquet, and Iceberg formats through SQL.
- Validate configuration inputs with Pydantic models.
- Work seamlessly with Azure ADLS Gen2, Microsoft Fabric Lakehouse, AWS S3, and Google Cloud Storage.
📑 Table of Contents
- Installation
- Quick Start
- Core Concepts
- Usage Examples
- Supported Formats
- Development & Testing
- License
🚀 Installation
pip install datalakesurfer
Requirements:
- Python 3.9+
- pyarrowfs-adlgen2==0.2.5
- adlfs==2024.12.0
- azure-identity==1.17.1
- azure-storage-file-datalake==12.21.0
- numpy==1.26.4
- pandas==1.5.3
- cryptography==43.0.1
- duckdb==1.3.2
- deltalake==1.1.4
- pydantic==2.11.7
- gcsfs==2025.7.0
- s3fs==2025.7.0
⚡ Quick Start
Generate Token
from azure.identity import DefaultAzureCredential, ClientSecretCredential
from azure.core.credentials import AccessToken
class CustomTokenCredential:
    """Fetch a fresh Azure Storage access token via DefaultAzureCredential."""
    def __init__(self):
        self.credential = DefaultAzureCredential()

    def get_token(self):
        token_response = self.credential.get_token("https://storage.azure.com/.default")
        return (token_response.token, token_response.expires_on)

class CustomTokenCredentialSecondary:
    """Wrap a pre-fetched token and expiry as an azure.core AccessToken credential."""
    def __init__(self, token, expires_on):
        self.token = token
        self.expires_on = expires_on

    def get_token(self, *scopes, **kwargs):
        return AccessToken(self.token, self.expires_on)

token, expires_on = CustomTokenCredential().get_token()
print(token)
print(expires_on)
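The import above also pulls in ClientSecretCredential, which is useful when authenticating as a service principal instead of through DefaultAzureCredential. A minimal sketch fetching the same storage token that way (the tenant, client, and secret values are placeholders):

from azure.identity import ClientSecretCredential

# Service-principal credentials; replace the placeholders with your values.
credential = ClientSecretCredential(
    tenant_id="<YOUR_TENANT_ID>",
    client_id="<YOUR_CLIENT_ID>",
    client_secret="<YOUR_CLIENT_SECRET>",
)
# Same Azure Storage scope as in the Quick Start above.
token_response = credential.get_token("https://storage.azure.com/.default")
token, expires_on = token_response.token, token_response.expires_on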
from datalakesurfer.format_detector import FormatDetector
f = FormatDetector(account_name="adlssynapseeus01",
file_system_name="datacontainer",
directory_path="salesorder",
token=token,
expires_on=expires_on).detect_format()
print(f)
# → {'status': 'success', 'format': 'delta'}
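The returned dict can drive which retriever you instantiate next. A small sketch that dispatches on the detected format using the ADLS schema retrievers shown later in this README:

from datalakesurfer.schemas.delta_schema import DeltaSchemaRetriever
from datalakesurfer.schemas.iceberg_schema import IcebergSchemaRetriever
from datalakesurfer.schemas.parquet_schema import ParquetSchemaRetriever

# Map each detected format to its ADLS schema retriever.
retrievers = {
    "delta": DeltaSchemaRetriever,
    "iceberg": IcebergSchemaRetriever,
    "parquet": ParquetSchemaRetriever,
}
if f["status"] == "success":
    retriever_cls = retrievers[f["format"]]
    schema = retriever_cls(account_name="adlssynapseeus01",
                           file_system_name="datacontainer",
                           directory_path="salesorder",
                           token=token,
                           expires_on=expires_on).get_schema()
    print(schema)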
🧠 Core Concepts
- Format Detectors: identify the dataset format (iceberg, delta, parquet).
- Schema Retrievers: extract a normalized schema regardless of source format.
- Partition Detectors: identify partitioning and list partition columns.
- Models: all incoming parameters are validated by Pydantic for correctness (see the sketch below).
📚 Usage Examples
1. Detect Format (ADLS)
from datalakesurfer.format_detector import FormatDetector
f = FormatDetector(account_name="adlssynapseeus01",
file_system_name="iceberg-container",
directory_path="customerdw/mydb/product",
token=token,
expires_on=expires_on).detect_format()
print(f)
# {'status': 'success', 'format': 'iceberg'}
from datalakesurfer.format_detector import FormatDetector
f = FormatDetector(account_name="adlssynapseeus01",
file_system_name="datacontainer",
directory_path="SaleParquetData",
token=token,
expires_on=expires_on).detect_format()
print(f)
# {'status': 'success', 'format': 'parquet'}
2. Detect Format (Fabric)
from datalakesurfer.fabric_format_detector import FabricFormatDetector
f = FabricFormatDetector(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/salesorder",
token=token,
expires_on=expires_on).detect_format()
print(f)
# Resolved path: abfss://devworkspace@onelake.dfs.fabric.microsoft.com/devlakehouse.Lakehouse/Files/salesorder
# {'status': 'success', 'format': 'delta'}
3. Detect Format (GCS)
import json
with open("/Workspace/Users/<>/fresh-replica-393006-0398d554d3fa.json", "r") as f:
service_account_dict = json.load(f)
from datalakesurfer.gcs_format_detector import GCSFormatDetector
f = GCSFormatDetector(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="Country").detect_format()
print(f)
# {'status': 'success', 'format': 'delta'}
f = GCSFormatDetector(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="ParquetDataSource").detect_format()
print(f)
# {'status': 'success', 'format': 'parquet'}
f = GCSFormatDetector(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="customerdw/mydb/product").detect_format()
print(f)
# {'status': 'success', 'format': 'iceberg'}
4. Retrieve Schema (ADLS)
For Parquet/Delta/Iceberg in ADLS:
from datalakesurfer.schemas.delta_schema import DeltaSchemaRetriever
s = DeltaSchemaRetriever(account_name="adlssynapseeus01",
file_system_name="datacontainer",
directory_path="salesorder",
token=token,
expires_on=expires_on).get_schema()
print(s)
#{'status': 'success', 'schema': [{'column_name': 'SalesId', 'dtype': 'string'}, {'column_name': 'ProductName', 'dtype': 'string'}, {'column_name': 'SalesDateTime', 'dtype': 'timestamp'}, {'column_name': 'SalesAmount', 'dtype': 'int'}, {'column_name': 'EventProcessingTime', 'dtype': 'timestamp'}]}
from datalakesurfer.schemas.iceberg_schema import IcebergSchemaRetriever
s = IcebergSchemaRetriever(account_name="adlssynapseeus01",
file_system_name="iceberg-container",
directory_path="customerdw/mydb/product",
token=token,
expires_on=expires_on).get_schema()
print(s)
#{'status': 'success', 'schema': [{'column_name': 'productId', 'dtype': 'string'}, {'column_name': 'productName', 'dtype': 'string'}, {'column_name': 'productDescription', 'dtype': 'string'}, {'column_name': 'productPrice', 'dtype': 'double'}, {'column_name': 'dataModified', 'dtype': 'timestamp'}]}
from datalakesurfer.schemas.parquet_schema import ParquetSchemaRetriever
s = ParquetSchemaRetriever(account_name="adlssynapseeus01",
file_system_name="datacontainer",
directory_path="SaleParquetData",
token=token,
expires_on=expires_on).get_schema()
print(s)
#{'status': 'success', 'schema': [{'column_name': 'SalesId', 'dtype': 'string'}, {'column_name': 'ProductName', 'dtype': 'string'}, {'column_name': 'SalesDateTime', 'dtype': 'timestamp'}, {'column_name': 'SalesAmount', 'dtype': 'int'}, {'column_name': 'EventProcessingTime', 'dtype': 'timestamp'}]}
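Every retriever returns the same normalized shape, so downstream code can stay format-agnostic. For example, turning the schema list into a column-to-dtype mapping:

# Works with any of the retrievers above, since they share one schema shape.
dtypes = {col["column_name"]: col["dtype"] for col in s["schema"]}
print(dtypes)
# {'SalesId': 'string', 'ProductName': 'string', ...}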
5. Retrieve Schema (GCS)
For Delta/Iceberg/Parquet in GCS:
from datalakesurfer.schemas.gcs_delta_schema import GCSDeltaSchemaRetriever
s = GCSDeltaSchemaRetriever(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="Country").get_schema()
print(s)
# [{'column_name': 'SalesId', 'dtype': 'string'}, ...]
from datalakesurfer.schemas.gcs_iceberg_schema import GCSIcebergSchemaRetriever
s = GCSIcebergSchemaRetriever(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="customerdw/mydb/product").get_schema()
print(s)
# [{'column_name': 'productId', 'dtype': 'string'}, ...]
from datalakesurfer.schemas.gcs_parquet_schema import GCSParquetSchemaRetriever
s = GCSParquetSchemaRetriever(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="ParquetDataSource").get_schema()
print(s)
# [{'column_name': 'SalesId', 'dtype': 'string'}, ...]
6. Detect Partitions
ADLS:
from datalakesurfer.schemas.parquet_schema import ParquetSchemaRetriever
f = ParquetSchemaRetriever(account_name="adlssynapseeus01",
file_system_name="datacontainer",
directory_path="LargeSaleParquetData2Billion",
token=token,
expires_on=expires_on).detect_partitions()
print(f)
#{'status': 'success', 'isPartitioned': True, 'partition_columns': [{'column_name': 'ProductName', 'dtype': 'string'}, {'column_name': 'SalesAmount', 'dtype': 'int32'}]}
from datalakesurfer.schemas.parquet_schema import ParquetSchemaRetriever
f = ParquetSchemaRetriever(account_name="adlssynapseeus01",
file_system_name="datacontainer",
directory_path="SaleParquetData",
token=token,
expires_on=expires_on).detect_partitions()
print(f)
#{'status': 'success', 'isPartitioned': False, 'partition_columns': []}
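Either result is easy to consume programmatically; for example, extracting just the partition column names:

if f["isPartitioned"]:
    partition_cols = [c["column_name"] for c in f["partition_columns"]]
    print(partition_cols)  # e.g. ['ProductName', 'SalesAmount']
else:
    print("Dataset is not partitioned")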
Fabric — detect format:
from datalakesurfer.fabric_format_detector import FabricFormatDetector
f = FabricFormatDetector(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/Product",
token=token,
expires_on=expires_on).detect_format()
print(f)
#{'status': 'success', 'format': 'parquet'}
from datalakesurfer.fabric_format_detector import FabricFormatDetector
f = FabricFormatDetector(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/FlightTripData",
token=token,
expires_on=expires_on).detect_format()
print(f)
#{'status': 'success', 'format': 'iceberg'}
Fabric — retrieve schema:
from datalakesurfer.schemas.fabric_delta_schema import FabricDeltaSchemaRetriever
s = FabricDeltaSchemaRetriever(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/salesorder",
token=token,
expires_on=expires_on).get_schema()
print(s)
#{'status': 'success', 'schema': [{'column_name': 'SalesId', 'dtype': 'string'}, {'column_name': 'ProductName', 'dtype': 'string'}, {'column_name': 'SalesDateTime', 'dtype': 'timestamp'}, {'column_name': 'SalesAmount', 'dtype': 'int'}, {'column_name': 'EventProcessingTime', 'dtype': 'timestamp'}]}
from datalakesurfer.schemas.fabric_iceberg_schema import FabricIcebergSchemaRetriever
s = FabricIcebergSchemaRetriever(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/FlightTripData",
token=token,
expires_on=expires_on).get_schema()
print(s)
#{'status': 'success', 'schema': [{'column_name': 'flight_id', 'dtype': 'int'}, {'column_name': 'airline_code', 'dtype': 'int'}, {'column_name': 'passenger_count', 'dtype': 'int'}, {'column_name': 'flight_duration_minutes', 'dtype': 'bigint'}, {'column_name': 'baggage_weight', 'dtype': 'float'}, {'column_name': 'ticket_price', 'dtype': 'double'}, {'column_name': 'tax_amount', 'dtype': 'decimal(38,18)', 'typeproperties': {'precision': 10, 'scale': 2}}, {'column_name': 'destination', 'dtype': 'string'}, {'column_name': 'flight_data', 'dtype': 'string'}, {'column_name': 'is_international', 'dtype': 'boolean'}, {'column_name': 'departure_timestamp', 'dtype': 'timestamp'}, {'column_name': 'arrival_timestamp_ntz', 'dtype': 'timestamp'}, {'column_name': 'flight_date', 'dtype': 'date'}]}
from datalakesurfer.schemas.fabric_parquet_schema import FabricParquetSchemaRetriever
s = FabricParquetSchemaRetriever(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/Product",
token=token,
expires_on=expires_on).get_schema()
print(s)
# {'status': 'success', 'schema': [{'column_name': 'name', 'dtype': 'string'}, {'column_name': 'age', 'dtype': 'long'}]}
Fabric — detect partitions:
from datalakesurfer.schemas.fabric_parquet_schema import FabricParquetSchemaRetriever
s = FabricParquetSchemaRetriever(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/Product",
token=token,
expires_on=expires_on).detect_partitions()
print(s)
#{'status': 'success', 'isPartitioned': False, 'partition_columns': []}
from datalakesurfer.schemas.fabric_parquet_schema import FabricParquetSchemaRetriever
f = FabricParquetSchemaRetriever(account_name="onelake",
file_system_name="devworkspace",
directory_path="devlakehouse.Lakehouse/Files/Customer",
token=token,
expires_on=expires_on).detect_partitions()
print(f)
#{'status': 'success', 'isPartitioned': True, 'partition_columns': [{'column_name': 'city', 'dtype': 'string'}]}
7. Detect Partitions (GCS)
from datalakesurfer.schemas.gcs_parquet_schema import GCSParquetSchemaRetriever
f = GCSParquetSchemaRetriever(service_account_info=service_account_dict,
file_system_name="storagebucket0001",
directory_path="ParquetDataSource").detect_partitions()
print(f)
# {'status': 'success', 'isPartitioned': True, 'partition_columns': [{'column_name': 'ProductName', 'dtype': 'string'}, ...]}
For a non-partitioned dataset, the same call returns an empty partition list (the path below is a placeholder; point it at your own non-partitioned data):
f = GCSParquetSchemaRetriever(service_account_info=service_account_dict,
        file_system_name="storagebucket0001",
        directory_path="<non-partitioned-dataset>").detect_partitions()
print(f)
# {'status': 'success', 'isPartitioned': False, 'partition_columns': []}
8. AWS S3 Usage
Set your AWS credentials:
AWS_ACCESS_KEY_ID = "<YOUR_AWS_ACCESS_KEY_ID>"
AWS_SECRET_ACCESS_KEY = "<YOUR_AWS_SECRET_ACCESS_KEY>"
AWS_REGION = "us-east-1"
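Rather than hardcoding secrets in source, one option is to read them from environment variables:

import os

# Read credentials from the environment instead of embedding them in code.
AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
AWS_REGION = os.environ.get("AWS_REGION", "us-east-1")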
Detect Format (S3)
from datalakesurfer.s3_format_detector import S3FormatDetector
f = S3FormatDetector(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="Country").detect_format()
print(f)
# {'status': 'success', 'format': 'delta'}
f = S3FormatDetector(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="ParquetDataSource").detect_format()
print(f)
# {'status': 'success', 'format': 'parquet'}
f = S3FormatDetector(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="customerdw/mydb/product").detect_format()
print(f)
# {'status': 'success', 'format': 'iceberg'}
Retrieve Schema (S3)
from datalakesurfer.schemas.s3_delta_schema import S3DeltaSchemaRetriever
s = S3DeltaSchemaRetriever(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="Country").get_schema()
print(s)
# [{'column_name': 'SalesId', 'dtype': 'string'}, ...]
from datalakesurfer.schemas.s3_iceberg_schema import S3IcebergSchemaRetriever
s = S3IcebergSchemaRetriever(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="customerdw/mydb/product").get_schema()
print(s)
# [{'column_name': 'productId', 'dtype': 'string'}, ...]
from datalakesurfer.schemas.s3_parquet_schema import S3ParquetSchemaRetriever
s = S3ParquetSchemaRetriever(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="ParquetDataSource").get_schema()
print(s)
# [{'column_name': 'SalesId', 'dtype': 'string'}, ...]
Detect Partitions (S3 Parquet)
from datalakesurfer.schemas.s3_parquet_schema import S3ParquetSchemaRetriever
f = S3ParquetSchemaRetriever(aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION,
file_system_name="devs3bucket001",
directory_path="ParquetDataSource").detect_partitions()
print(f)
# {'status': 'success', 'isPartitioned': True, 'partition_columns': [{'column_name': 'ProductName', 'dtype': 'string'}, ...]}
For a non-partitioned dataset, the same call returns an empty partition list (the path below is a placeholder; point it at your own non-partitioned data):
f = S3ParquetSchemaRetriever(aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        aws_region=AWS_REGION,
        file_system_name="devs3bucket001",
        directory_path="<non-partitioned-dataset>").detect_partitions()
print(f)
# {'status': 'success', 'isPartitioned': False, 'partition_columns': []}
Query Delta Through SQL (GCS / S3 / ADLS Gen2 / Microsoft Fabric)
# GCS
from datalakesurfer.query.gcs_delta_query import GCSDeltaQueryRetriever
tables = {
"sales": "storagebucket0001/SalesOrder",
"country": "storagebucket0001/Country"
}
query = "SELECT * FROM sales CROSS JOIN country"
retriever = GCSDeltaQueryRetriever(service_account_info=service_account_dict)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# S3
from datalakesurfer.query.s3_delta_query import S3DeltaQueryRetriever
tables = {
"sales": "devs3bucket001/SalesOrder",
"country": "devs3bucket001/Country"
}
query = "SELECT * FROM sales CROSS JOIN country"
retriever = S3DeltaQueryRetriever(
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# Microsoft Fabric
from datalakesurfer.query.fabric_delta_query import FabricDeltaQueryRetriever
tables = {
"sales": "devlakehouse.Lakehouse/Files/salesorder",
"salesorder": "devlakehouse.Lakehouse/Files/salesorder"
}
query = "SELECT * FROM sales UNION ALL SELECT * FROM salesorder"
retriever = FabricDeltaQueryRetriever(
account_name="onelake",
file_system_name="devworkspace",
token=token,
expires_on=expires_on
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# ADLS Gen2
from datalakesurfer.query.adls_delta_query import ADLSDeltaQueryRetriever
tables = {
"sales": "salesorder"
}
query = "SELECT * FROM sales"
retriever = ADLSDeltaQueryRetriever(
account_name="adlssynapseeus01",
file_system_name="datacontainer",
token=token,
expires_on=expires_on
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
Query Parquet Through SQL (GCS / S3 / ADLS Gen2 / Microsoft Fabric)
# GCS
from datalakesurfer.query.gcs_parquet_query import GCSParquetQueryRetriever
tables = {
"sales": "storagebucket0001/ParquetDataSource",
"account": "storagebucket0001/ParquetDataSource",
}
query = "SELECT * FROM sales UNION ALL SELECT * FROM account where SalesId = 'adz02939-4557-4d44-8247-1a746232c478'"
retriever = GCSParquetQueryRetriever(service_account_info=service_account_dict)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# S3
from datalakesurfer.query.s3_parquet_query import S3ParquetQueryRetriever
tables = {
"sales": "devs3bucket001/ParquetDataSource",
"account": "devs3bucket001/ParquetDataSource",
}
query = "SELECT * FROM sales UNION ALL SELECT * FROM account where SalesId = 'adz02939-4557-4d44-8247-1a746232c478'"
retriever = S3ParquetQueryRetriever(
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# Microsoft Fabric
from datalakesurfer.query.fabric_parquet_query import FabricParquetQueryRetriever
tables = {
"product": "devlakehouse.Lakehouse/Files/Product",
"product2": "devlakehouse.Lakehouse/Files/Product"
}
query = "SELECT * FROM product UNION ALL SELECT * FROM product2 where name ='Alice'"
retriever = FabricParquetQueryRetriever(
account_name="onelake",
file_system_name="devworkspace",
token=token,
expires_on=expires_on
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# ADLS Gen2
from datalakesurfer.query.adls_parquet_query import ADLSParquetQueryRetriever
tables = {
"sales": "SaleParquetData"
}
query = "SELECT * FROM sales LIMIT 10"
retriever = ADLSParquetQueryRetriever(
account_name="adlssynapseeus01",
file_system_name="datacontainer",
token=token,
expires_on=expires_on
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
Query Iceberg Through SQL (GCS / S3 / ADLS Gen2 / Microsoft Fabric)
# GCS
from datalakesurfer.query.gcs_iceberg_query import GCSIcebergQueryRetriever
tables = {
"product": "storagebucket0001/customerdw/mydb/product",
"product2": "storagebucket0001/customerdw/mydb/product"
}
query = "SELECT * FROM product where productId = '0191043d-aac2-4ae6-9409-5f69851ca020' UNION ALL SELECT * FROM product2"
retriever = GCSIcebergQueryRetriever(service_account_info=service_account_dict)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# S3
from datalakesurfer.query.s3_iceberg_query import S3IcebergQueryRetriever
tables = {
"product": "devs3bucket001/customerdw/mydb/product",
"product2": "devs3bucket001/customerdw/mydb/product"
}
query = "SELECT * FROM product UNION ALL SELECT * FROM product2"
retriever = S3IcebergQueryRetriever(
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_region=AWS_REGION
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# Microsoft Fabric
from datalakesurfer.query.fabric_iceberg_query import FabricIcebergQueryRetriever
tables = {
"flighttripdata": "devlakehouse.Lakehouse/Files/FlightTripData"
}
query = "SELECT * FROM flighttripdata"
retriever = FabricIcebergQueryRetriever(
account_name="onelake",
file_system_name="devworkspace",
token=token,
expires_on=expires_on
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
# ADLS Gen2
from datalakesurfer.query.adls_iceberg_query import ADLSIcebergQueryRetriever
tables = {
"product": "customerdw/mydb/product"
}
query = "SELECT * FROM product"
retriever = ADLSIcebergQueryRetriever(
account_name="adlssynapseeus01",
file_system_name="iceberg-container",
token=token,
expires_on=expires_on
)
result_df = retriever.query(tables=tables, query=query)
print(result_df)
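All of the query retrievers above return a result you can print directly. Assuming the result is a pandas DataFrame (pandas is among the dependencies), it can be inspected and exported like any other frame:

# Inspect and persist the query result; assumes result_df is a pandas DataFrame.
print(result_df.head())
print(result_df.dtypes)
result_df.to_csv("product_export.csv", index=False)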
📦 Supported Formats
Format Detection & Schema Retrieval

| Format | ADLS Gen2 (Azure) | Fabric OneLake (Azure) | GCS (GCP) | S3 (AWS) |
|---|---|---|---|---|
| Parquet | ✓ | ✓ | ✓ | ✓ |
| Delta | ✓ | ✓ | ✓ | ✓ |
| Iceberg | ✓ | ✓ | ✓ | ✓ |

SQL Query Support

| Query Format | ADLS Gen2 (Azure) | Fabric OneLake (Azure) | GCS (GCP) | S3 (AWS) |
|---|---|---|---|---|
| Parquet | ✓ | ✓ | ✓ | ✓ |
| Delta | ✓ | ✓ | ✓ | ✓ |
| Iceberg | ✓ | ✓ | ✓ | ✓ |
🛠 Development & Testing
Run tests:
pytest
📄 License
MIT License – See LICENSE for details.