An easily customizable SQL parser and transpiler
SQLGlot
SQLGlot is a dependency-free Python SQL parser and transpiler. It can be used to format SQL or translate between different dialects like Presto, Spark, and Hive. It aims to read a wide variety of SQL inputs and output syntactically correct SQL in the targeted dialects.
It is currently the fastest pure-Python SQL parser.
You can easily customize the parser to support UDFs across dialects through the transform API.
Syntax errors are highlighted, and dialect incompatibilities can warn or raise depending on configuration.
Install
From PyPI
pip3 install sqlglot
Or with a local checkout
pip3 install -e .
Examples
Easily translate from one dialect to another. For example, date/time functions vary between dialects and can be hard to deal with.
import sqlglot
sqlglot.transpile("SELECT EPOCH_MS(1618088028295)", read='duckdb', write='hive')
SELECT TO_UTC_TIMESTAMP(FROM_UNIXTIME(1618088028295 / 1000, 'yyyy-MM-dd HH:mm:ss'), 'UTC')
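Note that transpile returns a list of SQL strings, one per statement in the input, which is why later examples index the result with [0]. A small sketch, with the echoed list shown for illustration:
import sqlglot
# One output string per input statement.
sqlglot.transpile("SELECT 1; SELECT 2")
['SELECT 1', 'SELECT 2']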
Formatting and Transpiling
Read in a SQL statement with a CTE and a cast to REAL, then transpile it to Spark.
Spark uses backticks to quote identifiers, and the REAL type is transpiled to FLOAT.
import sqlglot
sql = """WITH baz AS (SELECT a, c FROM foo WHERE a = 1) SELECT f.a, b.b, baz.c, CAST("b"."a" AS REAL) d FROM foo f JOIN bar b ON f.a = b.a LEFT JOIN baz ON f.a = baz.a"""
print(sqlglot.transpile(sql, write='spark', identify=True, pretty=True)[0])
WITH baz AS (
    SELECT
      `a`,
      `c`
    FROM `foo`
    WHERE
      `a` = 1
)
SELECT
  `f`.`a`,
  `b`.`b`,
  `baz`.`c`,
  CAST(`b`.`a` AS FLOAT) AS d
FROM `foo` AS f
JOIN `bar` AS b ON
  `f`.`a` = `b`.`a`
LEFT JOIN `baz` ON
  `f`.`a` = `baz`.`a`
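The read and write dialects are optional, so the same call doubles as a plain SQL formatter, per the description above. A minimal sketch, with illustrative output:
import sqlglot
# No read/write dialects: transpile just normalizes and pretty-prints the statement.
print(sqlglot.transpile("SELECT a, b from x WHERE a=1", pretty=True)[0])
SELECT
  a,
  b
FROM x
WHERE
  a = 1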
Custom Transforms
A simple transform on types can be accomplished by providing a dict that maps Expression classes or TokenTypes to lambdas or strings.
from sqlglot import *
transpile("SELECT CAST(a AS INT) FROM x", transforms={TokenType.INT: 'SPECIAL INT'})[0]
SELECT CAST(a AS SPECIAL INT) FROM x
More complicated transforms can be accomplished by using the Tokenizer, Parser, and Generator directly.
In this example, we want to parse a UDF SPECIAL_UDF and then output another version called SPECIAL_UDF_INVERSE with the arguments switched.
from sqlglot import *
from sqlglot.expressions import Func
class SpecialUDF(Func):
    arg_types = {'a': True, 'b': True}
tokens = Tokenizer().tokenize("SELECT SPECIAL_UDF(a, b) FROM x")
Here is the output of the tokenizer.
[
<Token token_type: TokenType.SELECT, text: SELECT, line: 0, col: 0>,
<Token token_type: TokenType.VAR, text: SPECIAL_UDF, line: 0, col: 7>,
<Token token_type: TokenType.L_PAREN, text: (, line: 0, col: 18>,
<Token token_type: TokenType.VAR, text: a, line: 0, col: 19>,
<Token token_type: TokenType.COMMA, text: ,, line: 0, col: 20>,
<Token token_type: TokenType.VAR, text: b, line: 0, col: 22>,
<Token token_type: TokenType.R_PAREN, text: ), line: 0, col: 23>,
<Token token_type: TokenType.FROM, text: FROM, line: 0, col: 25>,
<Token token_type: TokenType.VAR, text: x, line: 0, col: 30>,
]
expression = Parser(functions={
    'SPECIAL_UDF': SpecialUDF.from_arg_list,
}).parse(tokens)[0]
The expression tree produced by the parser.
(FROM this:
  (TABLE this: x, db: ), expression:
    (SELECT expressions:
      (COLUMN this:
        (FUNC a:
          (COLUMN this: a, db: , table: ), b:
          (COLUMN this: b, db: , table: )), db: , table: )))
Finally generating the new SQL.
Generator(transforms={
    SpecialUDF: lambda self, e: f"SPECIAL_UDF_INVERSE({self.sql(e, 'b')}, {self.sql(e, 'a')})"
}).generate(expression)
SELECT SPECIAL_UDF_INVERSE(b, a) FROM x
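Pulling the three steps together, a small helper (hypothetical, but composed only of the calls shown above) makes the round trip reusable:
from sqlglot import Tokenizer, Parser, Generator

def transpile_special(sql):
    # Tokenize, parse with the custom UDF registered, then generate with the inverse transform.
    tokens = Tokenizer().tokenize(sql)
    expression = Parser(functions={'SPECIAL_UDF': SpecialUDF.from_arg_list}).parse(tokens)[0]
    return Generator(transforms={
        SpecialUDF: lambda self, e: f"SPECIAL_UDF_INVERSE({self.sql(e, 'b')}, {self.sql(e, 'a')})",
    }).generate(expression)

transpile_special("SELECT SPECIAL_UDF(a, b) FROM x")
'SELECT SPECIAL_UDF_INVERSE(b, a) FROM x'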
Transform Trees
There is also a way to transform the parsed tree directly by applying a mapping function to each tree node recursively.
import sqlglot
import sqlglot.expressions as exp
expression_tree = sqlglot.parse_one("SELECT a FROM x")
def transformer(node):
    if isinstance(node, exp.Column) and node.args["this"].text == "a":
        return sqlglot.parse_one("FUN(a)")
    return node
transformed_tree = expression_tree.transform(transformer)
transformed_tree.sql()
The snippet above produces the following transformed expression:
SELECT FUN(a) FROM x
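The same mechanism applies to other node types as well; for example, a sketch that wraps every column reference in a hypothetical UPPER call (the result shown is illustrative):
import sqlglot
import sqlglot.expressions as exp

def upper_columns(node):
    # Replace each column node with a function call that wraps it, mirroring the transformer above.
    if isinstance(node, exp.Column):
        return sqlglot.parse_one(f"UPPER({node.sql()})")
    return node

sqlglot.parse_one("SELECT a, b FROM x").transform(upper_columns).sql()
'SELECT UPPER(a), UPPER(b) FROM x'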
Parse Errors
A syntax error will result in a parse error.
transpile("SELECT foo( FROM bar")
sqlglot.errors.ParseError: Expected )
SELECT foo( __FROM__ bar
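ParseError is a normal Python exception, so it can also be caught and handled:
from sqlglot import transpile
from sqlglot.errors import ParseError

try:
    transpile("SELECT foo( FROM bar")
except ParseError as e:
    # The message points at the offending token, as shown above.
    print(e)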
Unsupported Errors
Presto's APPROX_DISTINCT supports an accuracy argument which is not supported in Spark.
transpile(
    'SELECT APPROX_DISTINCT(a, 0.1) FROM foo',
    read='presto',
    write='spark',
)
WARNING:root:APPROX_COUNT_DISTINCT does not support accuracy
SELECT APPROX_COUNT_DISTINCT(a) FROM foo
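Whether an unsupported feature merely warns (as above) or raises is configurable; the exact knob is version-dependent, but a sketch along these lines illustrates the idea (unsupported_level, ErrorLevel, and UnsupportedError are assumed names, not confirmed by this document):
from sqlglot import transpile
from sqlglot.errors import ErrorLevel, UnsupportedError  # assumed names

try:
    transpile(
        'SELECT APPROX_DISTINCT(a, 0.1) FROM foo',
        read='presto',
        write='spark',
        unsupported_level=ErrorLevel.RAISE,  # assumed option: raise instead of warning
    )
except UnsupportedError as e:
    print(e)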
Rewrite SQL
Modify SQL expressions, for example by adding a CTAS (CREATE TABLE AS SELECT).
from sqlglot import Generator, parse_one
from sqlglot.rewriter import Rewriter
expression = parse_one("SELECT * FROM y")
Rewriter(expression).ctas('x').expression.sql()
CREATE TABLE x AS SELECT * FROM y
Benchmarks
Benchmarks were run on Python 3.9.6; times are in seconds.
Query | sqlglot | sqlparse | moz_sql_parser | sqloxide |
---|---|---|---|---|
short | 0.00042 | 0.00079 | 0.00129 | 2.701e-05 |
long | 0.00425 | 0.01149 | 0.01577 | 0.0002045 |
crazy | 0.01231 | 2.12605 | 0.26796 | 0.0010257 |
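The benchmark harness itself is not included here, but figures of this kind can be reproduced with a simple timing loop; a sketch with a placeholder query:
import timeit
import sqlglot

query = "SELECT a, b FROM x JOIN y ON x.id = y.id WHERE a > 1"  # placeholder for the "short" query

# Average seconds per parse over 100 iterations.
print(timeit.timeit(lambda: sqlglot.parse_one(query), number=100) / 100)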
Run Tests and Lint
python -m unittest && python -m pylint sqlglot/ tests/