
A simple wrapper (based on turbodbc) for the most common MSSQL operations I face day-to-day.


SQL Tools

Description

This is a thin turbodbc wrapper that reduces boilerplate on routine SQL actions, making the code a little cleaner.

Content

Core functionality includes:

  • SQLConfig: connection configuration options; pass TurbODBCOptions for advanced driver options
  • Query: a single query (inline SQL text or a path to a .sql file) together with its result
  • QuerySequence: a collection of Query objects that can be run sequentially or in parallel
from driven_sql_tool import SQLConfig, TurbODBCOptions, Query, QuerySequence

Usage samples

# prepare a config instance ...
turbodbc_options = TurbODBCOptions(autocommit=False, use_async_io=False)
conf = SQLConfig(server=r'server.address', database=r'db', turbodbc_options=turbodbc_options)
# ... or simply set defaults
SQLConfig.default_server = r'default.server'
SQLConfig.default_database = r'default.database'
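
Class-level defaults like these are a plain Python pattern: class attributes that instances fall back to when no value is passed. The sketch below illustrates the idea only; `SQLConfigSketch` and its fields are hypothetical stand-ins, not the library's actual implementation.

```python
class SQLConfigSketch:
    """Illustrative stand-in for SQLConfig's default mechanism (hypothetical)."""
    default_server = 'default.server'
    default_database = 'default.database'

    def __init__(self, server=None, database=None):
        # Fall back to the class-level defaults when no value is given
        self.server = server or type(self).default_server
        self.database = database or type(self).default_database

# Reassigning the class attribute affects every instance created afterwards
SQLConfigSketch.default_server = 'prod.server'
print(SQLConfigSketch().server)         # prod.server
print(SQLConfigSketch('other').server)  # other
```

Because the fallback reads `type(self).default_server` at construction time, defaults can be changed once at startup and picked up everywhere.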

Most common actions (Data Query Language):

# regular querying
df_res = Query('SELECT * FROM db.schema.table', conf=conf).execute()
# omitting the `conf` attribute falls back to the class-level defaults
df_res = Query('SELECT * FROM db.schema.table').execute()
# the result is also stored in the `data` attribute of the `Query` object
query = Query('SELECT * FROM db.schema.table')
query.execute()
df_res = query.data

# .sql file querying
df_res = Query('./path/to/file.sql').execute()
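
Accepting either inline SQL or a file path implies a simple dispatch on the argument. The wrapper's actual logic isn't shown here; the `resolve_query` helper below is a hypothetical sketch of the idea.

```python
from pathlib import Path

def resolve_query(query: str) -> str:
    """Return SQL text, reading it from disk when given a path to a .sql file."""
    path = Path(query)
    if path.suffix.lower() == '.sql' and path.is_file():
        return path.read_text()
    return query

print(resolve_query('SELECT 1'))  # plain SQL passes through unchanged
```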

Parametrized actions

# parametrized insertion ...
Query(
    """
        INSERT INTO db.schema.table
        ([ID], [field1], [field2], [date])
        VALUES (?, ?, ?, ?)
    """, 
    data=df_insert[['ID', 'field1', 'field2', 'date']]
).execute()
# ... or execution
Query('EXEC db.schema.sproc @p1=?, @p2=?', data=df_exec[['p1', 'p2']]).execute()
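
Since the `?` placeholders are positional, the column order in the DataFrame slice must match the column list in the statement. The frame below is fabricated sample data, used only to show how selecting columns in placeholder order produces correctly ordered parameter rows.

```python
import pandas as pd

# Fabricated sample data; the column order of the source frame doesn't matter
df_insert = pd.DataFrame({
    'field1': ['a', 'b'],
    'ID': [1, 2],
    'date': ['2024-01-01', '2024-01-02'],
    'field2': [0.5, 1.5],
})

# Selecting columns in placeholder order yields rows aligned with the `?` marks
ordered = df_insert[['ID', 'field1', 'field2', 'date']]
rows = list(ordered.itertuples(index=False, name=None))
print(rows[0])
```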

Running multiple queries

# prepare sequence of queries
queries = QuerySequence()

queries.case_1 = Query('SELECT * FROM db.schema.table', conf=conf_1)
queries.case_2 = Query('./query.sql', conf=conf_2)
queries.case_3 = Query('EXEC db.schema.sproc @p1=?', data=df_exec[['p1']])

# run multiple queries sequentially ...
queries.run_seq()
# ... or, alternatively, in parallel (`joblib`)
queries.run_par()

# then access `data` property
df_res_1 = queries.case_1.data
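
Per the comment above, `run_par` parallelizes via joblib. As a library-agnostic illustration of why parallel dispatch pays off for I/O-bound database calls, here is a stdlib sketch using `concurrent.futures`; `run_query` is a hypothetical stand-in for `Query.execute`.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for Query.execute(); a real call would block on the DB
def run_query(sql: str) -> str:
    return f'rows for: {sql}'

statements = ['SELECT 1', 'SELECT 2', 'SELECT 3']

# Threads suit this workload: database queries spend most of their time
# waiting on network I/O, so the GIL is not a bottleneck
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_query, statements))

print(results[0])  # rows for: SELECT 1
```

`pool.map` preserves input order, so results line up with the submitted queries, mirroring how each `Query` in the sequence keeps its own `data`.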
