
A DQ package

Project description

Requirements

pip install -r requirements.txt

Env

.env File

API_KEY=xyz
API_URL=xyz

Or export the variables:

export API_KEY=xyz
export API_URL=xyz
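How the client reads these settings isn't shown here; as a minimal sketch (the `load_settings` helper below is an illustration, not part of the package), you could pull them from the process environment with `os.environ` and fail fast when one is missing:

```python
import os

# Minimal settings loader: reads the two variables shown above and raises
# a clear error if either is missing or empty.
def load_settings(env=os.environ):
    required = ("API_KEY", "API_URL")
    missing = [k for k in required if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: env[k] for k in required}

settings = load_settings({"API_KEY": "xyz", "API_URL": "xyz"})
print(settings["API_KEY"])  # → xyz
```

If you prefer the `.env` file route, `python-dotenv` can populate `os.environ` before this runs.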

Database User

GRANT USAGE ON SCHEMA "validation" TO anon;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA "validation" TO anon;
GRANT ALL ON SEQUENCE validation.rule_output_rule_output_id_seq TO anon;
GRANT ALL ON SEQUENCE validation.connections_connection_id_seq TO anon;
GRANT ALL ON SEQUENCE validation.owlcheck_q_job_id_seq TO anon;
GRANT ALL ON SEQUENCE validation.job_log_log_id_seq TO anon;
GRANT ALL ON SEQUENCE validation.assignment_q_id_seq TO anon;

Examples

# Individual Usage Examples

import duckdb
import pandas as pd
# Import path for APIClient is assumed; adjust to your installation.
# from duckdq import APIClient

# Client instantiation
api = APIClient(api_client)

engine = duckdb.connect(':memory:')
engine.sql("SELECT * FROM read_csv_auto('./data/fake_customers.csv') LIMIT 10").show()
engine.close()
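`read_csv_auto` infers column types from the file and `LIMIT` keeps the preview small. As a rough standard-library analogue (no type inference, unlike DuckDB; the file content below is made up for illustration):

```python
import csv
import io

# Rough stdlib analogue of the DuckDB preview above: read the header row
# plus the first n data rows from a CSV stream.
def preview(fileobj, n=10):
    reader = csv.reader(fileobj)
    header = next(reader)
    rows = [row for _, row in zip(range(n), reader)]
    return header, rows

sample = io.StringIO("id,name\n1,Ada\n2,Bob\n3,Cy\n")
header, rows = preview(sample, n=2)
print(header, rows)  # → ['id', 'name'] [['1', 'Ada'], ['2', 'Bob']]
```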

# owl_check_history, Delete, Insert, Read
api.delete_owl_check_history("test")
api.insert_owl_check_history("test", "2024-09-16")
rs = api.get_owl_check_history("test")
print(rs)

# owl_catalog, Delete, Insert, Read
api.delete_owl_catalog("test")
api.insert_owl_catalog("test")
rs = api.get_owl_catalog("test")
print(rs)

# dataset_schema, Delete, Insert, Read
api.delete_dataset_schema("test")
api.insert_dataset_schema("test")
rs = api.get_dataset_schema("test")
print(rs)

# dataset_field, Delete, Insert, Read
api.delete_dataset_field("test", "2024-09-16")
api.insert_dataset_field("test", "2024-09-16")
rs = api.get_dataset_field("test", "2024-09-16")
print(rs)

# Print the result
df = pd.DataFrame(rs.data)
print(df[['dataset','run_id','rc']])

# run rules 
api.delete_rule_output("test", "2024-09-16")
api.run_rules("test", "2024-09-16")

# scoring
rule_output = api.get_rule_output("test", "2024-09-16")
rule_score = 0
for r in rule_output.data:
    rule_score += r['score']
print(rule_score)
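The scoring pattern above sums a numeric `score` penalty from each `rule_output` row, and the `insert_dataset_scan` call that follows subtracts that total from 100. Standalone, with made-up rows matching the `{'score': ...}` shape the loop assumes:

```python
# Each rule_output row carries a numeric 'score' penalty, per the loop above.
rows = [{"rule": "null_check", "score": 3}, {"rule": "dupe_check", "score": 7}]

rule_score = sum(r["score"] for r in rows)  # total penalty across rules
passing = 100 - rule_score                  # value passed to insert_dataset_scan

print(rule_score, passing)  # → 10 90
```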

# dataset_scan, Delete, Insert, Read
delete_record = api.delete_dataset_scan("test", "2024-09-16")
add_record = api.insert_dataset_scan("test", "2024-09-16", 100, 100 - rule_score)

Register

dataset = 'test' 

# opt_spark
api.delete_opt_spark(dataset)
api.insert_opt_spark(dataset)
rs = api.get_opt_spark(dataset)
print(rs)
df = pd.DataFrame(rs.data)
display(df)

# opt_pushdown
api.delete_opt_pushdown(dataset)
api.insert_opt_pushdown(dataset)
rs = api.get_opt_pushdown(dataset)
print(rs)
df = pd.DataFrame(rs.data)
display(df)

# opt_profile
api.delete_opt_profile(dataset)
api.insert_opt_profile(dataset)
rs = api.get_opt_profile(dataset)
print(rs)
df = pd.DataFrame(rs.data)
display(df)

# opt_load
api.delete_opt_load(dataset)
api.insert_opt_load(dataset)
rs = api.get_opt_load(dataset)
print(rs)
df = pd.DataFrame(rs.data)
display(df)

# opt_env
api.delete_opt_env(dataset)
api.insert_opt_env(dataset)
rs = api.get_opt_env(dataset)
print(rs)
df = pd.DataFrame(rs.data)
display(df)

# opt_owl
api.delete_opt_owl(dataset)
api.insert_opt_owl(dataset)
rs = api.get_opt_owl(dataset)
print(rs)
df = pd.DataFrame(rs.data)
display(df)
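Every `opt_*` table above follows the same Delete, Insert, Read pattern, so the repetition can be collapsed with `getattr`. A sketch, using a stub in place of the real `APIClient` (the stub and the `register` helper are illustrations, not part of the package):

```python
# Stub that records every call made to it, standing in for APIClient.
class StubAPI:
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def method(*args):
            self.calls.append((name, args))
            return []
        return method

# Run delete/insert/get for each opt_* table, mirroring the Register section.
def register(api, dataset, tables=("opt_spark", "opt_pushdown", "opt_profile",
                                   "opt_load", "opt_env", "opt_owl")):
    results = {}
    for t in tables:
        getattr(api, f"delete_{t}")(dataset)
        getattr(api, f"insert_{t}")(dataset)
        results[t] = getattr(api, f"get_{t}")(dataset)
    return results

api = StubAPI()
register(api, "test")
print(len(api.calls))  # → 18 (3 calls per table)
```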

Job

dataset = 'test'
run_id = '2024-09-20'
conn = duckdb.connect(':memory:')
conn.sql(f"CREATE TABLE IF NOT EXISTS {dataset} AS SELECT * FROM read_csv_auto('./data/fake_customers.csv')")

# owl_check_history, 
# Delete, Insert, Read
api.delete_owl_check_history(dataset)
api.insert_owl_check_history(dataset, run_id)
rs = api.get_owl_check_history(dataset)
print(rs)

# owl_catalog, 
# Delete, Insert, Read
api.delete_owl_catalog(dataset)
api.insert_owl_catalog(dataset)
rs = api.get_owl_catalog(dataset)
print(rs)

# dataset_schema, 
# Delete, Insert, Read
api.delete_dataset_schema(dataset)
api.insert_dataset_schema(dataset)
rs = api.get_dataset_schema(dataset)
print(rs)

# dataset_field, 
# Delete, Insert, Read
api.delete_dataset_field(dataset, run_id)
api.insert_dataset_field(dataset, run_id)
rs = api.get_dataset_field(dataset, run_id)
print(rs)

# run rules 
api.delete_rule_output(dataset, run_id)
api.run_rules(dataset, run_id)

# scoring
rule_output = api.get_rule_output(dataset, run_id)
print(rule_output.data)

rule_score = 0
for r in rule_output.data:
    rule_score += r['score']
print(rule_score)

# dataset_scan, 
# Delete, Insert, Read
delete_record = api.delete_dataset_scan(dataset, run_id)
add_record = api.insert_dataset_scan(dataset, run_id, 100, 100 - rule_score)
rs = api.get_dataset_scan(dataset, run_id)
print(rs.data)

Download files

Download the file for your platform.

Source Distribution

duckdq-0.0.4.tar.gz (2.5 kB)

Uploaded Source

Built Distribution


duckdq-0.0.4-py3-none-any.whl (2.3 kB)

Uploaded Python 3

File details

Details for the file duckdq-0.0.4.tar.gz.

File metadata

  • Download URL: duckdq-0.0.4.tar.gz
  • Upload date:
  • Size: 2.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.4

File hashes

Hashes for duckdq-0.0.4.tar.gz
Algorithm Hash digest
SHA256 025284b62cdac2e0015ddf2ef8de2f2f22d27af742a0eca71f9225e4b7e15fb1
MD5 bad276851e67df108ac070a89d566e89
BLAKE2b-256 9b0ff2b5b1f4285039fae75ed3e0db34c721c37b0e436e1f1ddab1508602d1c3


File details

Details for the file duckdq-0.0.4-py3-none-any.whl.

File metadata

  • Download URL: duckdq-0.0.4-py3-none-any.whl
  • Upload date:
  • Size: 2.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.4

File hashes

Hashes for duckdq-0.0.4-py3-none-any.whl
Algorithm Hash digest
SHA256 53dcccd6638a032bc6541db26dab8a161faefc1b61b732aeabb61af0be9dd413
MD5 54bb24f85d5ee2c297f241b45864553b
BLAKE2b-256 83a375ba54f91c07eee73c43534075071d7f4b77943d4ff1cfbe5bc29655ff37

