chDB
chDB is an in-process SQL OLAP Engine powered by ClickHouse. [^1] For more details, see The birth of chDB.
Features
- In-process SQL OLAP Engine, powered by ClickHouse
- No need to install ClickHouse
- Minimized data copying from C++ to Python with Python memoryview
- Input and output support Parquet, CSV, JSON, Arrow, ORC, and 60+ more formats; see samples
- Supports Python DB-API 2.0; see example
Get Started
Get started with chdb using our Installation and Usage Examples
Installation
Currently, chDB supports Python 3.8+ on macOS and Linux (x86_64 and ARM64).
pip install chdb
Usage
Run in command line
python3 -m chdb SQL [OutputFormat]
python3 -m chdb "SELECT 1,'abc'" Pretty
Data Input
The following methods are available to access on-disk and in-memory data formats:
🗂️ Query On File
(Parquet, CSV, JSON, Arrow, ORC and 60+)
You can execute SQL and return data in the desired format.
import chdb
res = chdb.query('select version()', 'Pretty'); print(res)
Work with Parquet or CSV
# See more data type format in tests/format_output.py
res = chdb.query('select * from file("data.parquet", Parquet)', 'JSON'); print(res)
res = chdb.query('select * from file("data.csv", CSV)', 'CSV'); print(res)
print(f"SQL read {res.rows_read()} rows, {res.bytes_read()} bytes, elapsed {res.elapsed()} seconds")
Pandas dataframe output
# See more in https://clickhouse.com/docs/en/interfaces/formats
chdb.query('select * from file("data.parquet", Parquet)', 'Dataframe')
🗂️ Query On Table
(Pandas DataFrame, Parquet file/bytes, Arrow bytes)
Query On Pandas DataFrame
import chdb.dataframe as cdf
import pandas as pd
# Join 2 DataFrames
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': ["one", "two", "three"]})
df2 = pd.DataFrame({'c': [1, 2, 3], 'd': ["①", "②", "③"]})
ret_tbl = cdf.query(sql="select * from __tbl1__ t1 join __tbl2__ t2 on t1.a = t2.c",
tbl1=df1, tbl2=df2)
print(ret_tbl)
# Query on the DataFrame Table
print(ret_tbl.query('select b, sum(a) from __table__ group by b'))
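For comparison, the same join can be sanity-checked in plain pandas, without chDB. This is only an equivalence sketch: an inner merge on `a = c` mirrors the SQL `t1 join t2 on t1.a = t2.c` above.

```python
import pandas as pd

# Same sample frames as in the chDB example above
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': ["one", "two", "three"]})
df2 = pd.DataFrame({'c': [1, 2, 3], 'd': ["①", "②", "③"]})

# Inner join on a = c, mirroring "t1 join t2 on t1.a = t2.c"
joined = df1.merge(df2, left_on='a', right_on='c')
print(joined)
```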
🗂️ Query with Stateful Session
from chdb import session as chs
## Create DB, Table, View in temp session, auto cleanup when session is deleted.
sess = chs.Session()
sess.query("CREATE DATABASE IF NOT EXISTS db_xxx ENGINE = Atomic")
sess.query("CREATE TABLE IF NOT EXISTS db_xxx.log_table_xxx (x String, y Int) ENGINE = Log;")
sess.query("INSERT INTO db_xxx.log_table_xxx VALUES ('a', 1), ('b', 3), ('c', 2), ('d', 5);")
sess.query(
"CREATE VIEW db_xxx.view_xxx AS SELECT * FROM db_xxx.log_table_xxx LIMIT 4;"
)
print("Select from view:\n")
print(sess.query("SELECT * FROM db_xxx.view_xxx", "Pretty"))
see also: test_stateful.py.
🗂️ Query with Python DB-API 2.0
import chdb.dbapi as dbapi
print("chdb driver version: {0}".format(dbapi.get_client_info()))
conn1 = dbapi.connect()
cur1 = conn1.cursor()
cur1.execute('select version()')
print("description: ", cur1.description)
print("data: ", cur1.fetchone())
cur1.close()
conn1.close()
🗂️ Query with UDF (User Defined Functions)
from chdb.udf import chdb_udf
from chdb import query
@chdb_udf()
def sum_udf(lhs, rhs):
return int(lhs) + int(rhs)
print(query("select sum_udf(12,22)"))
Some notes on the chDB Python UDF (User Defined Function) decorator:
- The function should be stateless. Only UDFs are supported, not UDAFs (User Defined Aggregate Functions).
- The default return type is String. If you want a different return type, pass it in as an argument. The return type should be one of the ClickHouse data types: https://clickhouse.com/docs/en/sql-reference/data-types
- The function should take arguments of type String. As the input is TabSeparated, all arguments are strings.
- The function will be called for each line of input, something like this:
def sum_udf(lhs, rhs):
    return int(lhs) + int(rhs)

for line in sys.stdin:
    args = line.strip().split('\t')
    lhs = args[0]
    rhs = args[1]
    print(sum_udf(lhs, rhs))
    sys.stdout.flush()
- The function should be a pure Python function. You SHOULD import all Python modules used INSIDE THE FUNCTION:
def func_use_json(arg):
    import json
    ...
- The Python interpreter used is the same as the one used to run the script; get it from sys.executable.
see also: test_udf.py.
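The per-line protocol described in the notes above can be exercised standalone by swapping sys.stdin for an in-memory buffer; this is only a simulation of how chDB feeds TabSeparated rows to a UDF, not the engine itself:

```python
import io

def sum_udf(lhs, rhs):
    # chDB passes every argument as a string (TabSeparated input),
    # so the UDF body must cast explicitly.
    return int(lhs) + int(rhs)

# Stand-in for sys.stdin: two tab-separated input rows
stdin = io.StringIO("12\t22\n3\t4\n")

results = []
for line in stdin:
    lhs, rhs = line.strip().split('\t')
    results.append(sum_udf(lhs, rhs))

print(results)  # [34, 7]
```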
🗂️ Python Table Engine
Query on Pandas DataFrame
import chdb
import pandas as pd
df = pd.DataFrame(
{
"a": [1, 2, 3, 4, 5, 6],
"b": ["tom", "jerry", "auxten", "tom", "jerry", "auxten"],
}
)
chdb.query("SELECT b, sum(a) FROM Python(df) GROUP BY b ORDER BY b").show()
Query on Arrow Table
import chdb
import pyarrow as pa
arrow_table = pa.table(
{
"a": [1, 2, 3, 4, 5, 6],
"b": ["tom", "jerry", "auxten", "tom", "jerry", "auxten"],
}
)
chdb.query(
"SELECT b, sum(a) FROM Python(arrow_table) GROUP BY b ORDER BY b", "debug"
).show()
Query on chdb.PyReader class instance
- You must inherit from chdb.PyReader class and implement the
read
method. - The
read
method should:- return a list of lists, the first demension is the column, the second dimension is the row, the columns order should be the same as the first arg
col_names
ofread
. - return an empty list when there is no more data to read.
- be stateful, the cursor should be updated in the
read
method.
- return a list of lists, the first demension is the column, the second dimension is the row, the columns order should be the same as the first arg
- An optional
get_schema
method can be implemented to return the schema of the table. The prototype isdef get_schema(self) -> List[Tuple[str, str]]:
, the return value is a list of tuples, each tuple contains the column name and the column type. The column type should be one of the following: https://clickhouse.com/docs/en/sql-reference/data-types
import chdb
class myReader(chdb.PyReader):
def __init__(self, data):
self.data = data
self.cursor = 0
super().__init__(data)
def read(self, col_names, count):
print("Python func read", col_names, count, self.cursor)
if self.cursor >= len(self.data["a"]):
return []
block = [self.data[col] for col in col_names]
self.cursor += len(block[0])
return block
reader = myReader(
{
"a": [1, 2, 3, 4, 5, 6],
"b": ["tom", "jerry", "auxten", "tom", "jerry", "auxten"],
}
)
chdb.query(
"SELECT b, sum(a) FROM Python(reader) GROUP BY b ORDER BY b"
).show()
see also: test_query_py.py.
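The read method in the example above returns all remaining rows in a single block regardless of count. A sketch of a read that honors count by slicing from the cursor is shown below; it is plain Python (the class does not inherit chdb.PyReader, so it runs without the engine), with a driver loop standing in for the engine's repeated calls:

```python
class SlicingReader:
    """Illustrates a read() that returns at most `count` rows per call."""

    def __init__(self, data):
        self.data = data      # dict of column name -> list of values
        self.cursor = 0       # rows already handed out

    def read(self, col_names, count):
        total = len(self.data[col_names[0]])
        if self.cursor >= total:
            return []  # empty list signals end of data
        end = min(self.cursor + count, total)
        # Column-major block: first dimension is the column
        block = [self.data[col][self.cursor:end] for col in col_names]
        self.cursor = end
        return block

reader = SlicingReader({"a": [1, 2, 3, 4, 5], "b": ["x", "y", "z", "p", "q"]})
blocks = []
while True:
    block = reader.read(["a", "b"], 2)
    if not block:
        break
    blocks.append(block)
print(blocks)
```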
Limitations
- Column types supported: pandas.Series, pyarrow.array, chdb.PyReader
- Data types supported: Int, UInt, Float, String, Date, DateTime, Decimal
- Python Object type will be converted to String
- Pandas DataFrame gives the best performance; Arrow Table is faster than PyReader
For more examples, see examples and tests.
Demos and Examples
- Project Documentation and Usage Examples
- Colab Notebooks and other Script Examples
Benchmark
Documentation
- For chdb specific examples and documentation refer to chDB docs
- For SQL syntax, please refer to ClickHouse SQL Reference
Events
- Demo chDB at ClickHouse v23.7 livehouse! and Slides
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. Here are some ways you can help:
- Help test and report bugs
- Help improve documentation
- Help improve code quality and performance
Bindings
We welcome bindings for other languages, please refer to bindings for more details.
License
Apache 2.0, see LICENSE for more information.
Acknowledgments
chDB is mainly based on ClickHouse. [^1] For trademark and other reasons, I named it chDB.
Contact
- Discord: https://discord.gg/D2Daa2fM5K
- Email: auxten@clickhouse.com
- Twitter: @chdb
[^1]: ClickHouse® is a trademark of ClickHouse Inc. All trademarks, service marks, and logos mentioned or depicted are the property of their respective owners. The use of any third-party trademarks, brand names, product names, and company names does not imply endorsement, affiliation, or association with the respective owners.
Built Distributions
Hashes for chdb-2.1.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl

Algorithm | Hash digest
---|---
SHA256 | 2ec72b2966fc03b30410a88f23ec8254994c4a7e64d62f9559d371e6a9f3888f
MD5 | d8546fca01350d2161e44a039b160f2f
BLAKE2b-256 | 4dae73cb5dcf08295d0c4de20e2fa34d14d9b3b7132922fc13ab945944b75206
Hashes for chdb-2.1.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl

Algorithm | Hash digest
---|---
SHA256 | 090f4ce39b006a9e15cd23b96555d173efdcb7b0856909904c92fd516ad7121a
MD5 | b7eaf047be4b7778df23f6d0550558ea
BLAKE2b-256 | f998fcb3bd8a832b3531187d2d8f46471494f9247a3ae921916a938a36e2cf05
Hashes for chdb-2.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl

Algorithm | Hash digest
---|---
SHA256 | 3bc19147b082a1535a5ee2551335a3d74952d7f993ead093866adfd86bdf9bd9
MD5 | 5fafb31993521bbb78992e42e2fe89b8
BLAKE2b-256 | 6d890db377aaac3440fb1028ccdb42768eba7e18875410efe7de48b55ebaac18
Hashes for chdb-2.1.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl

Algorithm | Hash digest
---|---
SHA256 | a2e10b081d32ec6540cd1adce96db4c036ea975cc3169d924d8dcbb5c88edfd6
MD5 | 72af973f5d1ab5836a832b365b3d015f
BLAKE2b-256 | 19ffca031434358d49d8f619c6ed5837c41efa637d46c8329d54274cc1802368
Hashes for chdb-2.1.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl

Algorithm | Hash digest
---|---
SHA256 | ff2a3a0dabbf1635623df403b9aac1a15a8f899b8c6e41d30ea960b00fd8fc93
MD5 | e016e18102a73daa4df74b9d67e3ae85
BLAKE2b-256 | c64857107ef794c5a7aa7f910f2399db8a73227d3fecd23b1780add05ae25bac