
FOSFOR-IO: Read and write dataframes from different connectors.

Project description

Installation:

Without any optional dependencies:

pip install fosforio

With all optional dependencies:

pip install fosforio[all]

With snowflake:

pip install fosforio[snowflake]

With s3:

pip install fosforio[s3]

With azureblob:

pip install fosforio[azureblob]

With local:

pip install fosforio[local]

With sftp:

pip install fosforio[sftp]

With mysql:

pip install fosforio[mysql]

With hive:

pip install fosforio[hive]

With sqlserver:

pip install fosforio[sqlserver]

With postgres:

pip install fosforio[postgres]

Source code is available at: https://gitlab.fosfor.com/fosfor-decision-cloud/intelligence/refract-sdk

Usage:

To read a dataframe by dataset name only -

from fosforio import get_dataframe
get_dataframe("dataset_name")

# To read the top 3 records, filtered by col1='value1' and col2>value2 and sorted by col1.
get_dataframe("dataset_name", row_count=3, filter_condition="where col1='value1' and col2>value2 order by col1")

# To read data from other connections not listed here, pip install the mosaic-connector-python package.
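
The object returned by get_dataframe can be inspected like any dataframe. A minimal sketch, assuming the return type is a pandas DataFrame (an assumption; "dataset_name" is a placeholder as above):

# Read a published dataset and take a quick look at it.
df = get_dataframe("dataset_name")
print(df.shape)     # (rows, columns)
print(df.head(10))  # first 10 records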

To read a dataframe from a file in local storage -

from fosforio import get_local_dataframe
get_local_dataframe("local_file_name_with_absolute_path", row_count=3)

To use snowflake-related operations -

from fosforio import snowflake

# To get a snowflake connection object using the default snowflake connection created by the user, if available.
snowflake.get_connection()

# To get a snowflake connection object with a specific connection name.
snowflake.get_connection(connection_name="snowflake_con_name")

# To read a specific dataset published from a snowflake connection.
snowflake.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from a snowflake connection.
snowflake.get_dataframe("dataset_name", row_count=3)

# To read the top few records of a specific dataset published from a snowflake connection, with filter conditions.
snowflake.get_dataframe("dataset_name", row_count=3, filter_condition="where col1='value1' and col2>value2 order by col1")

# To execute a user-specific query in snowflake with the specified connection name.
snowflake.execute_query(query="user_query", database="db_name", schema="schema", connection_name="connection_name")

# To execute a user-specific query in snowflake with the current connection object, or with the user's default connection.
snowflake.execute_query(query="user_query", database="db_name", schema="schema")

# To close the snowflake connection; always close the connection after use!
snowflake.close_connection()
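
Since the connection should always be closed after use, a minimal end-to-end sketch using only the calls documented above (the dataset name, database, schema, and query are placeholders):

from fosforio import snowflake

try:
    snowflake.get_connection(connection_name="snowflake_con_name")
    df = snowflake.get_dataframe("dataset_name", row_count=100)
    snowflake.execute_query(query="select current_date()", database="db_name", schema="schema")
finally:
    # Release the connection even if a read or query fails.
    snowflake.close_connection()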

To use mysql-related operations -

from fosforio import mysql

# To get a mysql connection object using the default mysql connection created by the user, if available.
mysql.get_connection()

# To get a mysql connection object with a specific connection name.
mysql.get_connection(connection_name="mysql_con_name")

# To read a specific dataset published from a mysql connection.
mysql.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from a mysql connection.
mysql.get_dataframe("dataset_name", row_count=3)

# To read the top few records of a specific dataset published from a mysql connection, with filter conditions.
mysql.get_dataframe("dataset_name", row_count=3, filter_condition="where col1='value1' and col2>value2 order by col1")

# To execute a user-specific query in mysql with the specified connection name.
mysql.execute_query(query="user_query", connection_name="connection_name")

# To execute a user-specific query in mysql with the current connection object, or with the user's default connection.
mysql.execute_query(query="user_query")

# To close the mysql connection; always close the connection after use!
mysql.close_connection()
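
A sketch of reading several datasets over a single mysql connection before closing it once (the dataset names are hypothetical; assumes the module-level connection opened by get_connection is reused across calls, as the API above suggests):

from fosforio import mysql

mysql.get_connection(connection_name="mysql_con_name")
frames = {}
for name in ("customers", "orders", "payments"):  # hypothetical dataset names
    frames[name] = mysql.get_dataframe(name, row_count=1000)
mysql.close_connection()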

To use sqlserver-related operations -

Requires the sqlserver driver library.

# Create a custom template with the following commands added in the "Pre Init Script" section:
# sudo curl -o /etc/yum.repos.d/mssql-release.repo https://packages.microsoft.com/config/rhel/9.0/prod.repo
# sudo ACCEPT_EULA=Y yum install -y msodbcsql18
from fosforio import sqlserver

# To get a sqlserver connection object using the default sqlserver connection created by the user, if available.
sqlserver.get_connection()

# To get a sqlserver connection object with a specific connection name.
sqlserver.get_connection(connection_name="sqlserver_con_name")

# To read a specific dataset published from a sqlserver connection.
sqlserver.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from a sqlserver connection.
sqlserver.get_dataframe("dataset_name", row_count=3)

# To read the top few records of a specific dataset published from a sqlserver connection, with filter conditions.
sqlserver.get_dataframe("dataset_name", row_count=3, filter_condition="where col1='value1' and col2>value2 order by col1")

# To execute a user-specific query in sqlserver with the specified connection name.
sqlserver.execute_query(query="user_query", database="db_name", connection_name="connection_name")

# To execute a user-specific query in sqlserver with the current connection object, or with the user's default connection.
sqlserver.execute_query(query="user_query", database="db_name")

# To close the sqlserver connection; always close the connection after use!
sqlserver.close_connection()
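
The pre-init commands above install Microsoft's ODBC driver, which suggests the connector reaches SQL Server over ODBC. A small sanity check, assuming pyodbc is available in the environment (an assumption; fosforio may use a different driver internally):

import pyodbc

# Verify the driver installed by msodbcsql18 is registered with ODBC.
if not any("ODBC Driver 18 for SQL Server" in d for d in pyodbc.drivers()):
    raise RuntimeError("ODBC Driver 18 not found; run the Pre Init Script commands above.")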

To use hive-related operations -

from fosforio import hive

# To get a hive connection object using the default hive connection created by the user, if available. A user id is required (1001 is the default user_id).
hive.get_connection(user_id=1001)

# To get a hive connection object with a specific connection name. A user id is required (1001 is the default user_id).
hive.get_connection(connection_name="hive_con_name", user_id=1001)

# To read a specific dataset published from a hive connection. A user id is required (1001 is the default user_id).
hive.get_dataframe("dataset_name", user_id="1001")

# To read only the top few records of a specific dataset published from a hive connection. A user id is required (1001 is the default user_id).
hive.get_dataframe("dataset_name", user_id="1001", row_count=3)

# To read the top few records of a specific dataset published from a hive connection, with filter conditions. A user id is required (1001 is the default user_id).
hive.get_dataframe("dataset_name", user_id="1001", row_count=3, filter_condition="where col1='value1' and col2>value2 order by col1")

# To execute a user-specific query in hive with the specified connection name. A user id is required (1001 is the default user_id).
hive.execute_query(query="user_query", connection_name="connection_name", user_id="1001")

# To execute a user-specific query in hive with the current connection object, or with the user's default connection. A user id is required (1001 is the default user_id).
hive.execute_query(query="user_query", user_id="1001")

# To close the hive connection; always close the connection after use!
hive.close_connection()
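
Every hive call takes a user_id (1001 is the documented default). A minimal sketch using only the calls documented above ("dataset_name" and the query are placeholders):

from fosforio import hive

hive.get_connection(connection_name="hive_con_name", user_id=1001)
df = hive.get_dataframe("dataset_name", user_id="1001", row_count=5)
hive.execute_query(query="show tables", user_id="1001")
hive.close_connection()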

To use postgres-related operations -

from fosforio import postgres

# To get a postgres connection object using the default postgres connection created by the user, if available.
postgres.get_connection()

# To get a postgres connection object with a specific connection name.
postgres.get_connection(connection_name="postgres_con_name")

# To read a specific dataset published from a postgres connection.
postgres.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from a postgres connection.
postgres.get_dataframe("dataset_name", row_count=3)

# To read the top few records of a specific dataset published from a postgres connection, with filter conditions.
postgres.get_dataframe("dataset_name", row_count=3, filter_condition="where col1='value1' and col2>value2 order by col1")

# To execute a user-specific query in postgres with the specified connection name.
postgres.execute_query(query="user_query", connection_name="connection_name")

# To execute a user-specific query in postgres with the current connection object, or with the user's default connection.
postgres.execute_query(query="user_query")

# To close the postgres connection; always close the connection after use!
postgres.close_connection()
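
The postgres module mirrors the mysql API above. A brief sketch combining a query with a dataset read (the query and dataset name are hypothetical):

from fosforio import postgres

postgres.get_connection(connection_name="postgres_con_name")
postgres.execute_query(query="create table if not exists scratch (id int)")  # hypothetical query
df = postgres.get_dataframe("dataset_name", row_count=10)
postgres.close_connection()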

To use sftp-related operations -

from fosforio import sftp

# To get an sftp connection object using the default sftp connection created by the user, if available.
sftp.get_connection()

# To get an sftp connection object with a specific connection name.
sftp.get_connection(connection_name="sftp_con_name")

# To read a specific dataset published from an sftp connection.
sftp.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from an sftp connection.
sftp.get_dataframe("dataset_name", row_count=3)

# Use the sftp connection object c for any sftp-related operation (get, put, listdir, etc.).
c = sftp.get_connection()

# To close the sftp connection; always close the connection after use!
sftp.close_connection()
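
The connection object c exposes file operations such as get, put and listdir. A minimal sketch, assuming it behaves like a paramiko SFTPClient (an assumption; all paths are hypothetical):

from fosforio import sftp

c = sftp.get_connection()
print(c.listdir("/incoming"))                       # list a remote directory
c.get("/incoming/data.csv", "/tmp/data.csv")        # download a remote file
c.put("/tmp/results.csv", "/outgoing/results.csv")  # upload a local file
sftp.close_connection()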

To use Amazon S3-related operations -

from fosforio import s3

# To get an s3 connection object using the default s3 connection created by the user, if available.
s3.get_connection()

# To get an s3 connection object with a specific connection name.
s3.get_connection(connection_name="s3_con_name")

# To read a specific dataset published from an s3 connection.
s3.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from an s3 connection.
s3.get_dataframe("dataset_name", row_count=3)

# Use the s3 connection object c for any s3-related operation.
c = s3.get_connection()
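
A minimal sketch of working with the connection object directly, assuming it is a boto3-style S3 client (an assumption; the bucket and prefix are hypothetical):

from fosforio import s3

c = s3.get_connection()
response = c.list_objects_v2(Bucket="my-bucket", Prefix="exports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])  # object key and size in bytes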

To use azure blob-related operations -

from fosforio import azure

# To get an azure blob connection object using the default azure connection created by the user, if available.
azure.get_connection()

# To get an azure blob connection object with a specific connection name.
azure.get_connection(connection_name="azureblob_con_name")

# To read a specific dataset published from an azureblob connection.
azure.get_dataframe("dataset_name")

# To read only the top few records of a specific dataset published from an azure connection.
azure.get_dataframe("dataset_name", row_count=3)

# Use the azure connection object c for any azure-related operation.
c = azure.get_connection()
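
A minimal sketch of working with the connection object directly, assuming it is an azure.storage.blob BlobServiceClient (an assumption; the container name is hypothetical):

from fosforio import azure

c = azure.get_connection()
container = c.get_container_client("my-container")
for blob in container.list_blobs():
    print(blob.name)  # list every blob in the container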

Note: Currently supported native connectors: snowflake, mysql, hive, sqlserver, postgres, sftp, s3, azureblob, local (NAS).



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fosforio-1.0.2.tar.gz (14.5 kB)


Built Distribution

fosforio-1.0.2-py3-none-any.whl (21.0 kB)


File details

Details for the file fosforio-1.0.2.tar.gz.

File metadata

  • Download URL: fosforio-1.0.2.tar.gz
  • Size: 14.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.16

File hashes

Hashes for fosforio-1.0.2.tar.gz:
  • SHA256: 85df6e8a7fa9aa951214bc7f92c3f8ad5fd4371ad6df3aec2d46f08f00a1ed65
  • MD5: 13177bb389b90b07958391c1c0483968
  • BLAKE2b-256: cd56c3751da8473e3eb92c7edcc8505bf3768ea6a70aa84bdd0f7ff03c4cf564



File details

Details for the file fosforio-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: fosforio-1.0.2-py3-none-any.whl
  • Size: 21.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.16

File hashes

Hashes for fosforio-1.0.2-py3-none-any.whl:
  • SHA256: fdc95bd3561258db1225f851e52c6816830b842f3d9d688a719ea731fb9ae805
  • MD5: 23dcad872c71746a68d25054c9e0e03a
  • BLAKE2b-256: 5b98b5e8b5febc280807ac2ebabd7d8f1e1986add138a393fd03647115a200ad


