# MLFastFlow
A Python package for fast dataflow and workflow processing.
## Installation

```bash
pip install mlfastflow
```
## Features

- Easy-to-use data sourcing with the `Sourcing` class
- Flexible vector search capabilities
- Optimized for data processing workflows
- Powerful BigQuery integration with support for:
  - Table operations (create, truncate, delete)
  - Asynchronous query execution for long-running jobs
  - Efficient data transfer between BigQuery and GCS
  - Advanced GCS folder management capabilities
## Quick Start

```python
from mlfastflow import Sourcing

# Create a sourcing instance
sourcing = Sourcing(
    query_df=your_query_dataframe,
    db_df=your_database_dataframe,
    columns_for_sourcing=["column1", "column2"],
    label="your_label",
)

# Process your data
sourced_db_df_without_label, sourced_db_df_with_label = sourcing.sourcing()
```
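For orientation, here is a minimal sketch of what the two input frames might look like. The column names, values, and label column below are hypothetical; the real schema depends on your data, with `columns_for_sourcing` expected to exist in both frames:

```python
import pandas as pd

# Hypothetical inputs (assumed layout): db_df holds candidate rows plus
# a label column; query_df holds the rows to source against.
your_database_dataframe = pd.DataFrame({
    "column1": [1.1, 2.2, 3.3],
    "column2": [0.4, 0.6, 0.9],
    "your_label": [0, 1, 0],
})
your_query_dataframe = pd.DataFrame({
    "column1": [1.0, 2.0],
    "column2": [0.5, 0.7],
})
```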
## BigQuery Integration

MLFastFlow provides a powerful `BigQueryClient` class for seamless integration with Google BigQuery and Google Cloud Storage (GCS).
### Initialization

```python
from mlfastflow import BigQueryClient

# Initialize the client with your GCP credentials
bq_client = BigQueryClient(
    project_id="your-gcp-project-id",
    dataset_id="your_dataset",
    key_file="/path/to/your/service-account-key.json",
)
```
### Running SQL Queries

```python
# Execute a SQL query and get the results as a pandas DataFrame
df = bq_client.sql2df("SELECT * FROM your_dataset.your_table LIMIT 10")

# Run a statement without returning a DataFrame
bq_client.run_sql("CREATE TABLE your_dataset.new_table AS SELECT * FROM your_dataset.source_table")

# For long-running queries, run_sql returns a job_id for status checking
job_id = bq_client.run_sql("CREATE TABLE your_dataset.large_table AS SELECT * FROM your_dataset.huge_table")

# Check the status of an asynchronous query job
job_status = bq_client.check_job_status(job_id)
```
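One way to wait on an asynchronous job is to poll `check_job_status`. The sketch below assumes the method returns a BigQuery-style state string ending in `"DONE"`; the exact return format isn't documented here, so adjust accordingly:

```python
import time

# Poll until the job reports a terminal state. BigQuery jobs finish in
# state "DONE" (errors are reported separately); the exact string
# returned by check_job_status is an assumption here.
while bq_client.check_job_status(job_id) != "DONE":
    time.sleep(10)  # back off between status checks
print("Job finished")
```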
### Table Operations

```python
# Truncate a table (remove all rows while preserving the schema)
bq_client.truncate_table("your_table_name")
```
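To confirm the truncation, you can run a quick count with `sql2df` from the previous section:

```python
# The table should still exist but contain zero rows.
count_df = bq_client.sql2df(
    "SELECT COUNT(*) AS n FROM your_dataset.your_table_name"
)
print(count_df["n"].iloc[0])  # expect 0 after truncation
```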
### DataFrame to BigQuery

```python
import pandas as pd

# Create a sample DataFrame
df = pd.DataFrame({
    'id': [1, 2, 3],
    'name': ['Alice', 'Bob', 'Charlie'],
    'value': [100, 200, 300],
})

# Upload the DataFrame to BigQuery
bq_client.df2table(
    df=df,
    table_id="your_table_name",
    if_exists="fail",  # Options: 'fail', 'append'
)
```
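To add rows to an existing table rather than fail, the same call works with `if_exists="append"`, the other option listed above:

```python
# Append two more rows to the table created above
more_rows = pd.DataFrame({
    'id': [4, 5],
    'name': ['Dana', 'Eve'],
    'value': [400, 500],
})
bq_client.df2table(
    df=more_rows,
    table_id="your_table_name",
    if_exists="append",
)
```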
### BigQuery to Google Cloud Storage

```python
# Export query results to GCS as Parquet files (the default format)
bq_client.sql2gcs(
    sql="SELECT * FROM your_dataset.your_table",
    destination_uri="gs://your-bucket/path/to/export/",
    destination_format="PARQUET",  # Options: 'PARQUET', 'CSV', 'JSON', 'AVRO'
)

# Export large query results with control over file sizes using SQL EXPORT DATA
bq_client.sql2gcs_via_query(
    sql="SELECT * FROM your_dataset.large_table",
    destination_uri="gs://your-bucket/path/to/export/data-*.parquet",
    destination_format="PARQUET",
    max_file_size="5GB",  # Control output file size
)
```
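For reference, exports of this kind map onto BigQuery's standard `EXPORT DATA` statement, which you could also submit yourself through `run_sql`. The sketch below shows the base statement; whether `sql2gcs_via_query` builds exactly this SQL internally is an assumption:

```python
# EXPORT DATA is documented BigQuery standard SQL; the sharding across
# files is driven by the wildcard in the destination URI.
export_sql = """
EXPORT DATA OPTIONS (
  uri = 'gs://your-bucket/path/to/export/data-*.parquet',
  format = 'PARQUET',
  overwrite = true
) AS
SELECT * FROM your_dataset.large_table
"""
job_id = bq_client.run_sql(export_sql)
```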
```python
# Save the SQL query text to GCS for documentation/audit purposes
bq_client.save_sql_to_gcs(
    sql_content="SELECT * FROM your_dataset.your_table WHERE date = '2025-05-08'",
    bucket_name="your-bucket",
    blob_name="queries/daily_extract.sql",
    metadata={"purpose": "daily_extraction", "author": "data_team"},
)
```
### Google Cloud Storage to BigQuery

```python
# Load data from GCS into BigQuery
bq_client.gcs2table(
    gcs_uri="gs://your-bucket/path/to/data/*.parquet",
    table_id="your_destination_table",
    write_disposition="WRITE_TRUNCATE",  # Options: 'WRITE_TRUNCATE', 'WRITE_APPEND', 'WRITE_EMPTY'
    source_format="PARQUET",  # Options: 'PARQUET', 'CSV', 'JSON', 'AVRO', 'ORC'
)
```
### GCS Folder Management

```python
# Create a proper folder in GCS that appears in the GCS Console
bq_client.create_gcs_folder("gs://your-bucket/new-folder/")

# Delete a folder and all of its contents
success, deleted_count = bq_client.delete_gcs_folder(
    gcs_folder_path="gs://your-bucket/folder-to-delete/",
    dry_run=True,  # Set to False to actually delete
)
print(f"Would delete {deleted_count} files" if success else "Error occurred")
```
### Resource Management

```python
# Explicitly close the client when done to free resources
bq_client.close()
del bq_client
bq_client = None
```
## Utility Functions

### CSV to Parquet Conversion

Convert CSV files to the more efficient Parquet format using high-performance Polars with LazyFrame processing:

```python
from mlfastflow import csv2parquet

# Convert a single CSV file to Parquet
csv2parquet("path/to/file.csv")

# Convert all CSV files in a directory
csv2parquet("path/to/directory")

# Convert all CSV files in a directory and its subdirectories
csv2parquet("path/to/directory", sub_folders=True)

# Specify a custom output directory
csv2parquet("path/to/source", output_dir="path/to/destination")
```
This function efficiently handles large CSV files and directories with many files, leveraging Polars' LazyFrame for better performance and lower memory usage compared to pandas.
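For reference, the lazy pattern described above maps onto Polars roughly as follows; this is a minimal sketch of the underlying idea, not the package's actual implementation:

```python
import polars as pl

# scan_csv builds a LazyFrame without loading the file into memory;
# sink_parquet streams the rows straight to a Parquet file.
pl.scan_csv("path/to/file.csv").sink_parquet("path/to/file.parquet")
```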
For more detailed examples and advanced usage, refer to the documentation.
## License

MIT

## Author

Xileven
## File details

### mlfastflow-0.2.3.1.tar.gz

File metadata:

- Download URL: mlfastflow-0.2.3.1.tar.gz
- Upload date:
- Size: 36.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.12

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c5bc8d2ec9b06ca8ee8fbd1ad6c76b99d91e67aab2da0d7ba3c1017a7b9596d9` |
| MD5 | `ade4d62a33e11ff3ee9166264469675f` |
| BLAKE2b-256 | `50419baf83dd5a8533962e468a72602b79613ab15cd9e4fba3093644b111b371` |
### mlfastflow-0.2.3.1-py3-none-any.whl

File metadata:

- Download URL: mlfastflow-0.2.3.1-py3-none-any.whl
- Upload date:
- Size: 37.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.12

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b43cbb7128d504057753f12fe3490d7a2162cbfe4cb1efbe1e8c65217f5cdde6` |
| MD5 | `5cd2140104157156629e7b8702e8b2e8` |
| BLAKE2b-256 | `4da9fc556cc911386244ff5d89c59c7e7bf48e073d3e7651928fc4b095c232fe` |