
# PySQL

PySQL is a Python library for interacting with SQL Server databases. It provides a simplified interface for common database operations.



## Installation

PySQL requires the following packages:

  • pandas
  • numpy
  • sqlalchemy==1.4.32
  • pyodbc

Install them with pip:

pip install pandas numpy sqlalchemy==1.4.32 pyodbc

## Usage

Import PySQL and Table_analyzer:

from pySqlServer import PySQL, Table_analyzer


### Table_analyzer

A helper class for analyzing Pandas DataFrames to determine optimal SQL datatypes.

import pandas as pd

df = pd.read_csv('test.csv')
TA = Table_analyzer()
dtype_dict = TA.analyze(df, texts_buffer=0.2)

  • for texts_buffer > 1: N in NVARCHAR(N) is set to the column's maximum text length + texts_buffer (an absolute margin)
  • for 0 < texts_buffer < 1: N in NVARCHAR(N) is set to the maximum text length + texts_buffer * maximum text length (an extra percentage)
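For instance, a column whose longest value is 80 characters gets NVARCHAR(96) with texts_buffer=0.2 and NVARCHAR(100) with texts_buffer=20. A minimal sketch of the sizing rule as described above (the helper below is illustrative, not a library function):

```python
def nvarchar_width(max_length: int, texts_buffer: float) -> int:
    # Illustrative only: mirrors the texts_buffer rule described above.
    if texts_buffer > 1:
        # Absolute margin: add texts_buffer extra characters of headroom.
        return int(max_length + texts_buffer)
    # Percentage margin: add texts_buffer * max_length characters of headroom.
    return int(max_length + texts_buffer * max_length)

print(nvarchar_width(80, 0.2))  # 96  -> NVARCHAR(96)
print(nvarchar_width(80, 20))   # 100 -> NVARCHAR(100)
```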


### PySQL

  • Create a connection
pysql = PySQL()
pysql.create_connection(server='host_ip', database='mydb', username='myuser', password='mypassword')

Your user must have the 'db_datareader', 'db_datawriter', and 'db_ddlAdmin' roles for the module to work properly.


  • On first use (for each table) you must call create_dtypes():
from sqlalchemy import sql  # sql.sqltypes provides the SQLAlchemy column types

# It's suggested to use Table_analyzer to calculate an optimal dtype_dict:
# dtype_dict = TA.analyze(df, texts_buffer=0.2)
dtype_dict = {'col1': sql.sqltypes.INTEGER, 'col2': sql.sqltypes.NVARCHAR(100)}
pysql.create_dtypes(dtype_dict=dtype_dict, table_name='Test_table', schema='Test_schema')

  • On every later use you must call load_dtypes() first; then you can use the to_sql method to send data:
pysql.load_dtypes(table_name='Test_table', schema='Test_schema')    # created before
pysql.to_sql(df, 'Test_table', schema='Test_schema', if_exists='append', text_cutter=True, date_normalizer=True, method='multi', verbos=True)
  • you can pass primary_key='column_name' to set the table's primary key
  • this argument is only allowed on the first call, not on later ones
  • text_cutter=True truncates incoming text whose length exceeds the column capacity
  • date_normalizer=True converts date columns into a format suitable for SQL Server
  • method='multi' inserts multiple rows in one query
  • verbos=True shows a progress bar for the data transfer (default: False)
  • there are also read methods for reading data from your database; each returns a pandas DataFrame:
pysql.tables_list(schema=None)
pysql.read_sql_table(table_name, schema=None)
pysql.read_sql_query(query='SELECT * FROM TABLE_NAME')
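Putting the steps together, a first load might look like the sketch below (server, table, and file names are placeholders; it only uses the calls shown above):

```python
import pandas as pd
from pySqlServer import PySQL, Table_analyzer

pysql = PySQL()
pysql.create_connection(server='host_ip', database='mydb', username='myuser', password='mypassword')

df = pd.read_csv('test.csv')

# First run only: infer column types and register them for the table.
dtype_dict = Table_analyzer().analyze(df, texts_buffer=0.2)
pysql.create_dtypes(dtype_dict=dtype_dict, table_name='Test_table', schema='Test_schema')

# Every run: load the registered types, then append the data.
pysql.load_dtypes(table_name='Test_table', schema='Test_schema')
pysql.to_sql(df, 'Test_table', schema='Test_schema', if_exists='append', method='multi', verbos=True)

# Read the table back as a pandas DataFrame to check the result.
print(pysql.read_sql_table('Test_table', schema='Test_schema'))
```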

  • you can also update your table with an update_key and update_value

The line below updates the age of the person named Bob Looki in the personal_data table to 83:

pysql.update_table(table_name='personal_data', schema='persons', update_key={'name':'Bob', 'last_name':'Looki'}, update_value={'age':83})

  • logger is a commonly used method; it logs the datetime, process_id, and acting user, making your actions trackable

Every to_sql call can be joined to the logs on the 'process_id' column, so you can find out who started storing data, when, and how long the process took.

pysql.logger('create_connection', 'success', 'connected')


  • the auto-log system records all your method calls, as in the sample below:

auto_log sample:

| function | state | log | connection_user | process_id | datetime |
| --- | --- | --- | --- | --- | --- |
| create_connection | success | connected | pysql_user | -1 | 2023-08-12 16:04:22.000 |
| read_sql_table | start | success | pysql_user | 0 | 2023-08-12 16:04:22.000 |
| read_sql_table | end | success | pysql_user | 0 | 2023-08-12 16:04:23.000 |
| create_dtypes | start | success | pysql_user | 1 | 2023-08-12 16:04:23.000 |
| create_dtypes | end | success | pysql_user | 1 | 2023-08-12 16:04:23.000 |
| load_dtypes | start | success | pysql_user | 2 | 2023-08-12 16:04:24.000 |
| load_dtypes | end | success | pysql_user | 2 | 2023-08-12 16:04:24.000 |
| to_sql | start | success | pysql_user | 3 | 2023-08-12 16:04:24.000 |
| to_sql | end | success | pysql_user | 3 | 2023-08-12 16:04:32.000 |
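Because each to_sql run writes matching start/end rows with the same process_id, you can compute who ran each load and how long it took. A minimal sketch, assuming the auto-log is readable as an ordinary table ('pysql_log' and its schema below are placeholders, not a documented name):

```python
import pandas as pd

# 'pysql_log' is a placeholder for the auto-log table; substitute whatever
# table your installation actually writes its logs to.
logs = pysql.read_sql_table('pysql_log', schema='Test_schema')
logs['datetime'] = pd.to_datetime(logs['datetime'])

to_sql_logs = logs[logs['function'] == 'to_sql']
starts = to_sql_logs[to_sql_logs['state'] == 'start'].set_index('process_id')
ends = to_sql_logs[to_sql_logs['state'] == 'end'].set_index('process_id')

# Per process_id: who ran the load and how long it took.
report = starts[['connection_user']].join(
    (ends['datetime'] - starts['datetime']).rename('duration')
)
print(report)
```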

data to_sql sample:

| user_id | id | created_at | lang | favorite_count | quote_count | reply_count | retweet_count | views_count | bookmark_count |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *****17323920629770 | 1647162******634625 | 2023-04-15 08:57:34.000 | en | 2 | 0 | 0 | 0 | 177 | 1 |
| *****0417 | 1647153******613570 | 2023-04-15 08:23:18.000 | en | 1 | 0 | 1 | 0 | 12 | 0 |
| *****49585152565249 | 1682090******974593 | 2023-07-20 18:08:18.000 | en | 642 | 83 | 65 | 213 | 1749178 | 8 |
| *****87375859568642 | 1647152******957248 | 2023-04-15 08:18:00.000 | ar | 22 | 8 | 0 | 0 | 7 | 0 |
| *****0013028323329 | 1647127******033537 | 2023-04-15 06:40:00.000 | en | 2 | 0 | 1 | 0 | 84 | 2 |

dtypes table sample:

| table | schema | dtypes_str | proccess_id |
| --- | --- | --- | --- |
| T1 | Twitter | {'user_id': 'types.BIGINT()', ... , 'bookmark_count': 'types.INTEGER()'} | 1 |
| T2 | Twitter | {'user_profile_banner_url': 'types.NVARCHAR(88)', ..., 'user_description_urls': 'types.NVARCHAR(443)'} | 8 |
| T3 | Twitter | {'in_reply_to_status_id_str': 'types.BIGINT()', ..., 'is_quote': 'types.BOOLEAN()'} | 11 |



## Download files


Source Distribution

pySqlServer-3.1.0.tar.gz (10.8 kB)

Built Distribution

pySqlServer-3.1.0-py3-none-any.whl (11.3 kB)

File details

Details for the file pySqlServer-3.1.0.tar.gz.

File metadata

  • Download URL: pySqlServer-3.1.0.tar.gz
  • Upload date:
  • Size: 10.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

File hashes

Hashes for pySqlServer-3.1.0.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | b607abcdfd536fab15cb5b7cc855f6e233fc4af69b33d0bc067c545d844011fc |
| MD5 | 451d331e4999a99748267801d4d4192a |
| BLAKE2b-256 | 9b853cdb3bb46d8425fefd0bd759aef841a5ac59ccfe825dea9233ba0e2134d4 |

File details

Details for the file pySqlServer-3.1.0-py3-none-any.whl.

File metadata

  • Download URL: pySqlServer-3.1.0-py3-none-any.whl
  • Upload date:
  • Size: 11.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.3

File hashes

Hashes for pySqlServer-3.1.0-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | e6ededb9cab82a2e557776cdc01745db4d460622121130f2f823568c2284a946 |
| MD5 | e31fc2991424663aacb16889d8938388 |
| BLAKE2b-256 | 4396aa8f3d46222e429efbb5e4b82a45953333cbc3636473be46e05f18054c70 |
