
A simple wrapper (based on turbodbc) for the most common MSSQL operations I face day-to-day


SQL Tools

Description

This is a thin turbodbc wrapper that reduces boilerplate for routine SQL actions, making code a little cleaner.

Content

Core functionality includes:

  • SQLConfig : collection of connection configuration options, with TurbODBCOptions for advanced turbodbc settings
  • Query : a single query object
  • QuerySequence : a collection of Query objects
from driven_sql_tool import SQLConfig, TurbODBCOptions, Query, QuerySequence

Usage samples

# prepare a config instance ...
turbodbc_options = TurbODBCOptions(autocommit=False, use_async_io=False)
conf = SQLConfig(server=r'server.address', database=r'db', turbodbc_options=turbodbc_options)
# ... or simply set defaults
SQLConfig.default_server = r'default.server'
SQLConfig.default_database = r'default.database'
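The class-level defaults above follow a common fallback pattern: an explicit instance value wins, otherwise the shared class attribute is used. A minimal, self-contained sketch of that pattern (the names here are illustrative, not the library's internals):

```python
class Config:
    # class-level defaults, shared by all instances
    default_server = "default.server"
    default_database = "default.database"

    def __init__(self, server=None, database=None):
        # fall back to the class defaults when a value is omitted
        self.server = server or Config.default_server
        self.database = database or Config.default_database

conf = Config(database="db")
print(conf.server)    # -> "default.server" (class default)
print(conf.database)  # -> "db" (explicit value wins)
```

Changing `Config.default_server` later affects only instances created afterwards, which is why the README sets the defaults once, up front.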

Most common actions (Data Query Language):

# regular querying
df_res = Query('SELECT * FROM db.schema.table', conf=conf).execute()
# omitting `conf` falls back to the class-level defaults
df_res = Query('SELECT * FROM db.schema.table').execute()
# the result is also stored in the `data` attribute of the `Query` object
query = Query('SELECT * FROM db.schema.table')
query.execute()
df_res = query.data

# .sql file querying
df_res = Query('./path/to/file.sql').execute()

Parametrized actions

# parametrized insertion ...
Query(
    """
        INSERT INTO db.schema.table
        ([ID], [field1], [field2], [date])
        VALUES (?, ?, ?, ?)
    """, 
    data=df_insert[['ID', 'field1', 'field2', 'date']]
).execute()
# ... or stored-procedure execution
Query('EXEC db.schema.sproc @p1=?, @p2=?', data=df_exec[['p1', 'p2']]).execute()
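The `?` markers above are DB-API "qmark" placeholders: each one is bound positionally from a row of the supplied data, never spliced in as text. The same binding style can be sketched with the stdlib `sqlite3` module (which also uses qmark); the table and values below are made up purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ID INTEGER, field1 TEXT)")

rows = [(1, "a"), (2, "b")]
# each `?` is filled positionally from the row tuple,
# one execution per row -- the same shape as passing a
# DataFrame of parameter columns above
conn.executemany("INSERT INTO t (ID, field1) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # -> 2
```

Positional binding is why the column order of the DataFrame passed as `data` must match the order of the `?` placeholders in the statement.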

Running multiple queries

# prepare sequence of queries
queries = QuerySequence()

queries.case_1 = Query('SELECT * FROM db.schema.table', conf=conf_1)
queries.case_2 = Query('./query.sql', conf=conf_2)
queries.case_3 = Query('EXEC db.schema.sproc @p1=?', data=df_exec[['p1']])

# run multiple queries sequentially ...
queries.run_seq()
# ... or, alternatively, in parallel (`joblib`)
queries.run_par()

# then access `data` property
df_res_1 = queries.case_1.data
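Per the comment above, `run_par` hands the queries to `joblib`. The general idea (farming independent tasks out to workers, then collecting each result by name) can be sketched with the stdlib `concurrent.futures`; the task names and callables here are stand-ins, not the library's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# stand-ins for independent queries; a real Query.execute() would do I/O
tasks = {
    "case_1": lambda: "result 1",
    "case_2": lambda: "result 2",
}

with ThreadPoolExecutor() as pool:
    # submit everything first so the tasks overlap ...
    futures = {name: pool.submit(fn) for name, fn in tasks.items()}
    # ... then block on each result, keyed by name
    results = {name: f.result() for name, f in futures.items()}

print(results["case_1"])  # -> "result 1"
```

Threads suit this pattern because database queries are I/O-bound: the interpreter is free to run other tasks while each query waits on the server.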
