
SQL Maps

Project description

Introduction

SQLPhile is a Python-styled SQL generator. It looks like the Django ORM, but it has no relationship with Django or any ORM.

It is, however, inspired by the Django ORM and iBATIS SQL Maps.

SQLPhile can help keep your application code looking clean. It hides SQL statements from your script by letting you build them with Python functions and/or write SQL templates in separate files.

For example,

conn = psycopg2.connect (...)
cursor = conn.cursor ()

cursor.execute ("""
  SELECT type, org, count(*) cnt FROM rc_file
  WHERE org = 1 AND filename LIKE '%OCD'
  GROUP BY type
  ORDER BY org, cnt DESC
  LIMIT 10
  OFFSET 10
""")

The same code can be written with SQLPhile:

sp = SQLPhile ()

conn = psycopg2.connect (...)
cursor = conn.cursor ()

q = sp.ops.select ("rc_file", "type", "count(*) cnt")
q.filter (org = 1, name__endswith = 'OCD')
q.group_by ("type").order_by ("org", "-cnt")[10:20]
cursor.execute (q.as_sql ())

Or you can use an SQL template file, sqlmaps/file.sql:

<sql name="get_stat">
  SELECT type, org, count(*) cnt FROM rc_file
  WHERE {_filters}
  GROUP BY type
  ORDER BY org, cnt DESC
  {_limit} {_offset}
</sql>

Your app code becomes,

sp = SQLPhile ("sqlmaps")

conn = psycopg2.connect (...)
cursor = conn.cursor ()

q = sp.file.get_stat.filter (org = 1, filename__endswith = 'OCD')[10:20]
cursor.execute (q.as_sql ())

SQLPhile

SQLPhile is the main class of this package.

from sqlphile import SQLPhile

sp = SQLPhile (dir = None, auto_reload = False, engine = "postgresql")

Once a SQLPhile instance is created, you can reuse it throughout your entire app.
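For example, a minimal sketch of sharing a single instance across modules (the db.py module name is just an illustration, not part of SQLPhile):

# db.py - create the instance once at import time
from sqlphile import SQLPhile
sp = SQLPhile (dir = "./sqlmaps", auto_reload = False)

# anywhere else in your app
from db import sp
q = sp.ops.select ("rc_file", "id", "name")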

Simple Query

SQLPhile provides the ops object for generic SQL operations.

q = sp.ops.insert (tbl, name = "Hans", created = datetime.date.today ())
cursor.execute (q.as_sql ())

q = sp.ops.update (tbl, name = "Jenny", modified = datetime.date.today ())
q.filter (...)
cursor.execute (q.as_sql ())

q = sp.ops.select (tbl, "id", "name", "created", "modified")
q.filter (...)
cursor.execute (q.as_sql ())

q = sp.ops.delete (tbl)
q.filter (...)
cursor.execute (q.as_sql ())

Templating for Complex and Highly Customized Queries

If you create SQL templates in a specific directory,

from sqlphile import SQLPhile

sp = SQLPhile (dir = "./sqlmaps", auto_reload = True)

SQLPhile will load all of your templates from ./sqlmaps.

During development, set auto_reload to True.

Assume there is a template file named ‘file.sql’:

<sqlmap version="1.0">

<sql name="get_stat">
  SELECT type, org, count(*) cnt FROM rc_file
  WHERE {_filters}
  GROUP BY type
  ORDER BY org, cnt DESC
  {_limit} {_offset}
</sql>

It looks like an XML file, BUT IT’S NOT. All tags - <sqlmap>, <sql> and </sql> - must start at the beginning of a line. The SQL inside can be formatted however you like, but some indentation is recommended.

Now you can access each SQL template via its filename (without the extension) and the query's name attribute:

# filename.query name
q = sp.file.get_stat
q.filter (...).order_by (...)

# or
q = sp.file.get_stat.filter (...).order_by (...)

Note: if the filename is default.sql, you can omit it:

q = sp.get_stat
q.filter (...).order_by (...)

Note 2: do NOT use “ops.*” as a filename, since sp.ops is reserved for the generic SQL operations above.

Filtering & Excluding

The filter function is very similar to the Django ORM's:

q = sp.get_stat

q.filter (id = 1)
>> id = 1

q.filter (id__exact = 1)
>> id = 1

q.filter (id__eq = 1)
>> id = 1

q.exclude (id = 1)
>> NOT (id = 1)

q.filter (id__neq = 1)
>> id <> 1

q.filter (id__gte = 1)
>> id >= 1

q.filter (id__lt = 1)
>> id < 1

q.filter (id__between = (10, 20))
>> id BETWEEN 10 AND 20

q.filter (name__contains = "fire")
>> name LIKE '%fire%'

q.exclude (name__contains = "fire")
>> NOT name LIKE '%fire%'

q.filter (name__startswith = "fire")
>> name LIKE 'fire%'

# escaping %
q.filter (name__startswith = "fire%20ice")
>> name LIKE 'fire\%20ice%'

q.filter (name__endswith = "fire")
>> name LIKE '%fire'

q.filter (name = None)
>> name IS NULL

q.exclude (name = None)
>> NOT name IS NULL

q.filter (name__isnull = True)
>> name IS NULL

q.filter (name__isnull = False)
>> name IS NOT NULL

Also you can add multiple filters:

q.filter (name__isnull = False, id = 4)
>> name IS NOT NULL AND id = 4

All filters will be joined with the AND operator.
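Putting this together, a short sketch of building and executing a filtered query with ops (the column names here are only illustrative):

q = sp.ops.select ("rc_file", "id", "name")
q.filter (name__isnull = False, id__gte = 4)
q.exclude (name__contains = "fire")
cursor.execute (q.as_sql ())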

Q object

How can you add an OR operator?

from sqlphile import Q

q.filter (Q (id = 4) | Q (email__contains = "org"), name__isnull = False)
>> name IS NOT NULL AND (id = 4 OR email LIKE '%org%')

Note that Q objects come first and keyword arguments last. You can also add filters separately:

q.filter (name__isnull = False)
q.filter (Q (id = 4) | Q (email__contains = "org"))
>> (id = 4 OR email LIKE '%org%') AND name IS NOT NULL

To make an excluding filter with a Q object, use a tilde (~):

q.filter (Q (id = 4) | ~Q (email__contains = "org"))
>> (id = 4 OR NOT email LIKE '%org%')

F object

All values will be escaped and automatically wrapped in single quotes; to compare against other fields, use F.

from sqlphile import F

Q (email = F ("b.email"))
>> email = b.email

Q (email__contains = F ("org"))
>> email LIKE '%' || org || '%'

F can also be used with ops:

q = sp.ops.update (tbl, n_view = F ("n_view + 1"))
q.filter (...)
cursor.execute (q.as_sql ())

Ordering & Grouping

For ordering,

q = sp.ops.select (tbl, "id", "name", "created", "modified")
q.filter (...)
q.order_by ("id", "-modified")
>> ORDER BY id, modified DESC

For grouping,

q = sp.ops.select (tbl, "name", "count(*) cnt")
q.filter (...)
q.group_by ("name")
>> GROUP BY name

q.having ("count(*) > 10")
>> GROUP BY name HAVING count(*) > 10

Offset & Limit

For limiting the record set,

q = sp.ops.select (tbl, "id", "name", "created", "modified")
q [:100]
>> LIMIT 100

q [10:30]
>> LIMIT 20 OFFSET 10

Be careful with slicing: it follows Python slice semantics, so q [10:30] means 20 rows starting at offset 10 (LIMIT 20 OFFSET 10), not LIMIT 30.

Returning

To return columns after inserting or updating data,

q = sp.ops.insert (tbl, name = "Hans", created = datetime.date.today ())
q.returning ("id")
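With PostgreSQL and psycopg2, for example, the returned column can then be read from the cursor like any query result (this is plain psycopg2 usage, not a SQLPhile API):

cursor.execute (q.as_sql ())
new_id = cursor.fetchone ()[0]
conn.commit ()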

Using Templates

A template looks like this:

<sqlmap version="1.0">

<sql name="get_stat">
  SELECT type, org, count(*) cnt FROM rc_file
  WHERE {_filters}
  GROUP BY type
  ORDER BY org, cnt DESC
  {_limit} {_offset}
</sql>

<sql name="get_file">
  SELECT * FROM rc_file
  WHERE {_filters}
  {_order_by}
  {_limit}
  {_offset}
</sql>

You just fill in the variables your query requires:

q = sp.file.get_file.filter (id__gte = 1000)[:20]
q.order_by ("-id")

The currently reserved variables are listed below; a short sketch using them follows the list.

  • _filters

  • _group_by

  • _order_by

  • _limit

  • _offset

  • _having

  • _returning

  • _columns: comma-joined column list fed by data ()

  • _values: comma-joined value list fed by data ()

  • _pairs: comma-joined column=value list fed by data ()
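As a sketch of how these variables fit together, here is a hypothetical update template built on {_pairs} and {_filters} (the query name update_file is an assumption for illustration, not from the package):

<sql name="update_file">
  UPDATE rc_file SET {_pairs}
  WHERE {_filters}
</sql>

q = sp.file.update_file
q.data (name = "Hans Roh")
q.filter (id = 4)
cursor.execute (q.as_sql ())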

Variablize Your Query

You can add your own variables to your SQL:

<sql name="get_file">
  SELECT {cols} cnt FROM {tbl}
  WHERE {_filters}
</sql>

Now feed keyword arguments with feed ():

q = sp.file.get_file.filter (id__gte = 1000)
q.feed (cols = "id, name, created", tbl = "rc_file")

You can also feed values in a similar way:

<sql name="get_file">
  INSERT INTO {tbl} (name, created, birth_year)
  VALUES ({name}, {created}, {birth_year})
</sql>
q = sp.file.get_file
q.feed (tbl = "rc_file")
q.data (name = "Hans Roh", created = datetime.date.today (), birth_year = 2000)

What is the difference from feed ()? data () escapes values so they fit into SQL; you need not worry about single quotes, escaping, or type casting for datetime fields.

Actually, the explicit feed () call can be omitted:

# like instance constructor
q = sp.file.get_file (tbl = "rc_file")
q.data (name = "Hans Roh", created = datetime.date.today (), birth_year = 2000)

Data Encoding

Sometimes the data () method is insufficient. The D object, returned by encode (), converts a dictionary into SQL column and value formats that you can feed into an SQL template.

q = sp.file.get_file
d = q.encode (name = "Hans", id = 1, email = None)
d.values
>> 'Hans', 1, NULL

d.columns
>> name, id, email

d.pairs
>> name = 'Hans', id = 1, email = NULL

And you can feed them into a template:

<sql name="get_file">
  INSERT ({_columns}, {cols})
  VALUES ({_valuess}, {vals})
  {_returning};
</sql>

In your app,

q = sp.file.get_file
E = q.encode (name = 'Hans', id = 1)
q.feed (cols = E.columns, vals = E.values).data (area = "730")
q.returning ("id")
cursor.execute (q.as_sql ())
